Helmut Hlavacs
University of Vienna
Teaching Video Game Technologies
Date: November 20th, 2024, 2pm CET
Abstract: Computer games are complex software projects, requiring a delicate combination of software engineering, real-time techniques, smart tools, artificial intelligence, physics simulation, databases, art, music and much more. In our computer science curriculum at the Faculty of Computer Science at the University of Vienna we teach introductory courses on specialized topics around computer games. The main focus is on technology, even though there is one course on game design. In this talk I will discuss the basics of our technologically oriented courses, which include Real-Time Computer Graphics (focused on the Vulkan API), Real-Time Ray Tracing (ray tracing with the Vulkan API), Gaming Technologies (AI for computer games, physics simulation, parallelism, and data-oriented design), and Cloud Gaming (a mix of real-time networking and hands-on video encoding). All courses are based on the Vulkan API, and students must program a simple game based on the course objectives.
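To give a flavor of the course content, here is a minimal sketch of one recurring pattern taught in such courses: a fixed-timestep game loop with semi-implicit Euler physics integration. Python is used only for brevity (the courses themselves build on C++ and Vulkan), and all names are illustrative rather than taken from the actual course material.

# Minimal fixed-timestep game loop with semi-implicit Euler integration.
import time

DT = 1.0 / 60.0  # fixed physics timestep (60 Hz)

class Body:
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel

def step_physics(bodies, dt, gravity=-9.81):
    for b in bodies:
        b.vel += gravity * dt  # update velocity first (semi-implicit Euler)
        b.pos += b.vel * dt    # then position, using the new velocity

def run(bodies, duration=1.0):
    accumulator, last = 0.0, time.perf_counter()
    end = last + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        accumulator += now - last
        last = now
        while accumulator >= DT:  # catch up in fixed increments
            step_physics(bodies, DT)
            accumulator -= DT

run([Body(pos=10.0, vel=0.0)])

Decoupling the physics step from the (variable) frame rate in this way keeps simulation results deterministic, which is why the pattern appears in virtually every game engine.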
About the Speaker: Dr. Helmut Hlavacs is Full Professor for Computer Science at the University of Vienna, Austria (ORCID: orcid.org/0000-0001-6837-674X). Dr. Hlavacs has a PhD in Mathematics (2001) and an MSc in Mathematics (1993), and was awarded a Habilitation in the area of Applied Computer Science (2004). He currently heads the Education, Didactics, and Entertainment Computing (EDEN) research group, focusing on technical aspects of computer games, game engines and C++, rendering with the Vulkan API, and the application of computer games and virtual reality for well-being, health, and therapy, including VR applications for clinical psychology. Over his career he has received various prizes, including 11 best paper awards, an exhibitor award for one of his games, an eAward for one of his projects, a main award for communication in oncology featuring a gamified medical diary, and many more. He is author and co-author of more than 280 peer-reviewed articles, published at international conferences and in high-impact journals.
Benjamin Jarvis
École Polytechnique Fédérale de Lausanne
Embodied Teleoperation: Enabling Intuitive Control of Complex Robots
Date: November 12th, 2024, 1:45pm CET
Room: HS 1 / Zoom
Abstract: Teleoperation of robotic systems has been shown to improve with high levels of embodiment, particularly when operators experience an intuitive connection between their movements and the actions of the robot. This seminar explores advanced teleoperation interfaces, developed in the Laboratory of Intelligent Systems at EPFL, that enhance operator immersion and control across diverse robots. Wearable interfaces, such as the FlyJacket exoskeleton, enable users to control aerial robots through natural body motions, providing both kinesthetic and tactile feedback to improve awareness and precision while reducing strain. Personalized Body-Machine Interfaces (BoMIs) adapt to users' unique motor synergies, facilitating seamless control across different robot morphologies. Additionally, ongoing research examines First-Person View (FPV) perspectives for teleoperating aerial swarms, addressing the challenge of creating coherent perspectives within distributed systems to enhance engagement and task performance. Together, these technologies illustrate how immersive, body-driven teleoperation interfaces are advancing the future of intuitive and accessible robotic control.
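As a hedged illustration of the BoMI idea described above (not the Laboratory of Intelligent Systems' actual implementation), the sketch below fits a linear map from posture features to robot commands during a short calibration phase and then applies it online; all names and dimensions are hypothetical.

import numpy as np

def calibrate(posture_samples, command_samples):
    # Least-squares fit of W so that commands ~= posture @ W,
    # learned from a short per-user calibration session.
    W, *_ = np.linalg.lstsq(posture_samples, command_samples, rcond=None)
    return W

def body_to_command(posture, W, limits=(-1.0, 1.0)):
    # Map the current posture (e.g., torso roll/pitch from an IMU)
    # to robot commands, clipped to safe actuation limits.
    return np.clip(posture @ W, *limits)

# Example: 100 calibration frames of 4 posture features -> 2 commands.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 4))
C = P @ rng.normal(size=(4, 2))  # synthetic ground-truth mapping
W = calibrate(P, C)
print(body_to_command(P[0], W))

Because the map is fit per user, it can absorb individual motor synergies, which is the core motivation behind personalized BoMIs.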
About the Speaker: Benjamin Jarvis holds a combined Bachelor's Degree in Engineering Honours (Aeronautical (Space)) and Science (Advanced (Physics)) from the University of Sydney, where he graduated with First Class Honours. His early research experience includes contributions as a Visiting Researcher at the NASA Jet Propulsion Laboratory, where he worked on shape reconstruction and landing assessments for small celestial bodies. From 2021 to 2023, he served as a Research Affiliate at the University of Sydney, contributing to the development of a star tracker for small satellites. In 2023, Benjamin began his Ph.D. in Robotics at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. In the Laboratory of Intelligent Systems he researches methods for human perception and control of aerial swarms, developing interfaces that allow a single pilot to achieve intuitive and embodied control of distributed swarms.
Jian Peng
UFZ - Helmholtz Centre for Environmental Research
Land surface water dynamics from a model-data fusion perspective
Date: October 23rd, 2024, 2:00 pm CEST
Abstract: Land surface water and energy fluxes are essential components of the Earth system. The temporal and spatial variability of water and energy fluxes is determined by complex interactions between the land surface and the atmosphere, which in turn depend on the complexity of different landscape compartments such as soil, vegetation and topography. Remote sensing provides substantial opportunities to acquire complex water cycle information continuously in time and space. Further integration of remote sensing data with Earth system models will help improve the accuracy of global climate change predictions and assist in the development of research strategies for change mitigation and adaptation and sustainable transformation. Based on the model-data fusion approach, we aim to develop a coherent monitoring and modelling framework for terrestrial flux estimation across scales. The framework maximizes the use of multi-source satellite observations (optical, thermal infrared and microwave), which allow us to quantify soil moisture and evaporation dynamics at different spatiotemporal scales. The aim of this presentation is to introduce our recent work on the quantification of high-resolution water cycle variables and to provide insights on how these products can improve the understanding of land-atmosphere interactions and hydro-climatic extremes.
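For readers unfamiliar with model-data fusion, the following toy sketch (not the speaker's actual framework) shows its simplest building block: a scalar optimal-interpolation/Kalman update that combines a model forecast of soil moisture with a satellite retrieval, weighted by their error variances.

def fuse(model_value, model_var, obs_value, obs_var):
    # Weight the correction by relative uncertainty: the gain approaches 1
    # when the model is uncertain and 0 when the observation is uncertain.
    gain = model_var / (model_var + obs_var)
    analysis = model_value + gain * (obs_value - model_value)
    analysis_var = (1.0 - gain) * model_var  # fusion reduces uncertainty
    return analysis, analysis_var

# Example: model forecasts 0.25 m3/m3, satellite retrieval says 0.30 m3/m3.
print(fuse(model_value=0.25, model_var=0.004, obs_value=0.30, obs_var=0.002))

Real assimilation systems generalize this update to high-dimensional states and spatially correlated errors, but the weighting principle is the same.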
About the Speaker: Jian Peng is an Earth system scientist and Head of the Department of Remote Sensing at the UFZ in Leipzig. He is also a full professor for Hydrology and Remote Sensing at the University of Leipzig. His research interests are the quantitative retrieval of land surface parameters from remote sensing data, the assimilation of remote sensing data into climate and land surface process models, the understanding of land-atmosphere interactions using Earth system models and observational data, and the quantification of climate change impacts on water resources. His research focuses in particular on the estimation of high-resolution land surface water and energy fluxes from satellite observations, and on the investigation of hydrological and climatic extremes as well as their impacts on ecosystems. He has been involved in various national and international research projects funded by, among others, ESA, the EU, the UK Space Agency, NERC, and DFG. He has received numerous international awards, most recently the 2019 Remote Sensing Young Investigator Award of the Swiss scientific publisher MDPI. He is Editor-in-Chief of the Geoscience Data Journal.
Robert Krueger
New York University
Scalable Visual Analytics for Digital Cancer Pathology
Date: June 19th, 2024, 2:00 pm CEST
Room: S3 055 / Zoom
Abstract: With new tumor imaging technologies, cancer biology has entered a digital era. Artificial intelligence has enabled the processing and analysis of imaging data at unprecedented scale. While processing pipelines are rapidly evolving, pre-clinical research performed with the data is highly experimental and exploratory in nature, making integration of biomedical experts essential to steer workflows and interpret results.
In my talk, I will introduce a scalable rendering framework enabling users to load, display, and interactively navigate terabyte-sized multiplexed images of cancer tissue. I will then present visual analytics interfaces that build on this framework and support cell biologists and pathologists in their workflows. By leveraging both unsupervised and supervised learning in an interactive setting, cells in the tissue can be iteratively classified into tumor, immune, and stromal cell type hierarchies. Subsequently, spatial neighborhoods of cells are quantified in order to query and cluster recurring, biologically meaningful cellular interactions both within and across specimens. Once relevant biological patterns are identified, a novel focus-and-context lensing interface enables pathologists to further assess and annotate these regions of interest in an intuitive fashion. I will conclude with an outlook on my future research agenda, addressing the transition to volumetric and time-varying datasets, detailed analysis of cell-cell interaction profiles in high-resolution 3D data, and the joint exploration of multimodal images with increasing amounts of spatially-referenced sequencing data.
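As a hypothetical sketch of the neighborhood-quantification step described above (not the speaker's actual pipeline), one can summarize each cell by the cell-type composition of its k nearest spatial neighbors and then cluster these composition vectors into recurring neighborhood motifs.

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

def neighborhood_motifs(xy, cell_types, n_types, k=10, n_motifs=5):
    # k nearest spatial neighbors of every cell (first hit is the cell itself)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(xy)
    _, idx = nbrs.kneighbors(xy)
    # per-cell histogram of neighbor cell types
    comps = np.zeros((len(xy), n_types))
    for i, neighbors in enumerate(idx[:, 1:]):
        comps[i] = np.bincount(cell_types[neighbors], minlength=n_types) / k
    # cluster the composition vectors into recurring neighborhood motifs
    return KMeans(n_clusters=n_motifs, n_init=10).fit_predict(comps)

rng = np.random.default_rng(1)
xy = rng.uniform(size=(500, 2))       # cell centroids in a synthetic tissue
types = rng.integers(0, 3, size=500)  # e.g., tumor / immune / stromal
print(neighborhood_motifs(xy, types, n_types=3)[:10])

On real tissue, motifs found this way can then be queried and compared across specimens, which is the kind of analysis the talk's interfaces expose interactively.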
About the Speaker: Robert Krueger is an assistant professor at New York University (NYU) - Department of Computer Science and Engineering, and a member of VIDA, the NYU Visualization, Imaging, and Data Analysis Center. Krueger's research lies in the field of data visualization and visual analytics for spatial and spatially-referenced multivariate data, with a focus on biomedical visualization. He is the lead organizer of the transatlantic Visualization and Image Data Management Group (VIM) and a co-organizer of the Spatial Biology Association (SBA), where he has been closely collaborating with leading biologists and oncologists in the field of computational pathology and cancer research.
Previously, Dr. Krueger was a postdoctoral fellow and subgroup leader at the Visual Computing Group (VCG), School of Engineering and Applied Sciences at Harvard University, and a senior research scientist at the Laboratory of Systems Pharmacology, Harvard Medical School. Dr. Krueger received his Ph.D. degree (Dr. rer. nat.) in Computer Science at the Institute for Visualization and Interactive Systems, University of Stuttgart in 2017. Krueger's work is published in leading visualization journals, including IEEE Transactions on Visualization and Computer Graphics (TVCG) and Computer Graphics Forum (CGF), as well as in biological journals including Cell and Nature Methods.
Michael Behrisch
Utrecht University
Human-in-the-(Exploration-)Loop: Visual Pattern-Driven Exploration of Big Datasets
Date: May 22nd, 2024, 2:00 pm CEST
Abstract: Visual Analytics (VA) is the science of analytical reasoning in big and complex datasets facilitated by interactive visual interfaces. Computers are capable of processing enormous amounts of data while humans can creatively pursue their analytical tasks by incorporating their general knowledge. VA systems unite these strengths by allowing the user to interact, understand, and creatively steer the automatic data analysis process.
However, VA faces challenges: highly specialized expert visualizations, the need for expert model selection, and complex combinations of visualization and analysis techniques that blunt the impact of interaction. My research pursues a Visual Quality Metrics (VQM) driven approach to overcome these drawbacks. By using quantitative VQMs as visual pattern extractors, analysts can reason over large, complex datasets by exploring interpretable visual patterns in the visualizations.
This talk will demonstrate the overall VQM concept for detecting and exploiting meaningful visual patterns, with the aim of making data analysis more accessible, effective, efficient, transparent, and reliable. I will show how VQMs and rapid human-in-the-loop interactions can enhance big data exploration by enabling pattern-driven data exploration without relying on specialized visualizations or analysis techniques.
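To make the VQM idea concrete, here is a toy illustration (not one of the speaker's actual metrics): score every axis-pair scatterplot of a dataset with a simple pattern measure, absolute Pearson correlation, and surface only the most pattern-rich pairs to the analyst.

import numpy as np

def rank_scatterplots(data, top=3):
    # Score each dimension pair by |Pearson correlation| and return the
    # strongest patterns first, so the analyst inspects those plots first.
    n_dims = data.shape[1]
    corr = np.corrcoef(data, rowvar=False)
    scores = [(abs(corr[i, j]), i, j)
              for i in range(n_dims) for j in range(i + 1, n_dims)]
    return sorted(scores, reverse=True)[:top]

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=200)  # planted pattern
print(rank_scatterplots(X))

Production VQMs capture far richer patterns (clusters, outliers, trends) than correlation, but the workflow is the same: compute a metric per view, then let the analyst explore the high-scoring views.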
About the Speaker: Michael Behrisch has been a tenured Assistant Professor at Utrecht University in the Netherlands since 2017. Behrisch's research focuses on data visualization, visual analytics, and human-computer interaction, making contributions to areas such as the visualization of large graphs, interaction techniques, and the evaluation of visualization systems. He is known for his pioneering work on developing quality metrics for information visualization, which provide frameworks to evaluate and judge the quality and effectiveness of data visualizations. His work centers on interdisciplinary collaborations across domains, especially with a focus on multivariate (knowledge) graph visualization and multivariate time-series exploration. Behrisch regularly publishes at top-tier venues like IEEE VIS, EuroVis, and IEEE Transactions on Visualization and Computer Graphics (TVCG). He has served as a reviewer for these leading visualization conferences and journals.
Prior to joining Utrecht University, Behrisch held positions as a postdoctoral researcher at Harvard's Visual Computing Group and the Visual Analytics Laboratory at Tufts. Behrisch's research has made "broad and original contributions" as highlighted by his strong publication record of over 60 well-cited papers across top venues in the field of visualization and human-computer interaction.
Jan Aerts
KU Leuven
From Complexity to Comprehensibility: an Integrative View on Biological and Agricultural Systems
Date: May 8th, 2024, 2:00 pm CEST
Abstract: Biology is messy and complex. In this talk I will explore how we can embrace this complexity and take a more holistic approach to understanding the intricate and multifaceted nature of biological and agricultural systems. Central to this discussion is the role of visual analytics, a key tool that helps researchers embrace the inherent complexity and uncertainty in these fields and allows for multimodal data integration. The presentation will delve into the nuances of data visualization and novel visual design, highlighting how they can be used as a tool for thought and for generating new hypotheses. Furthermore, I will discuss the added value of topological data analysis and multilayer networks, demonstrating their efficacy in uncovering hidden patterns and connections in complex datasets.
This talk aims to illustrate how integrating these methods can lead to a deeper and more comprehensive understanding of complex biological and agricultural systems, paving the way for more informed decision-making and innovative research breakthroughs.
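As a small, hedged illustration of the multilayer-network idea (not the speaker's software), the sketch below represents the same set of genes in two relation layers and computes a simple cross-layer statistic; the layer names and edges are invented.

import networkx as nx

# The same node set appears in several relation layers.
layers = {
    "co-expression": nx.Graph([("g1", "g2"), ("g2", "g3")]),
    "regulatory":    nx.Graph([("g1", "g3"), ("g3", "g4")]),
}

def participation(node):
    # In how many layers is this node active, and with what total degree?
    active = [name for name, g in layers.items() if node in g]
    degree = sum(g.degree(node) for g in layers.values() if node in g)
    return active, degree

for gene in ["g1", "g2", "g3", "g4"]:
    print(gene, participation(gene))

Even this trivial statistic already distinguishes nodes that matter in one layer from nodes that bridge layers, the kind of hidden structure multilayer analysis is after.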
About the Speaker: Jan Aerts is a Professor at KU Leuven with a background in omics and bioinformatics, having contributed to large model organism sequencing projects. In 2010 he shifted his focus to data visualization. He supports domain experts and non-experts in making sense of complex data, using data visualization, visual analytics and data analysis. His work aims to help the expert define new hypotheses in complex data, and grasp the intricacies of data that includes interactions, feedforward and/or feedback loops, time sensitivity, hidden subpopulations/patterns, etc. It is his conviction that data visualization is a necessary complement to machine learning and AI (by allowing the expert to drive the analysis and take responsibility for data-driven decisions) as well as to statistics (by embracing the full complexity of the data).
Micah Corah
Colorado School of Mines
Active Perception for Robot Teams: From Visual Search to Videography
Date: March 20th, 2024, 5:00 pm CET
Abstract: Over the last ten years, drones have become increasingly integrated into our society: drones film our sports, monitor our crops, survey our geography, and inspect our disaster sites. Across these domains, drones are key because they are particularly adept at maneuvering cameras and sensors to ideal vantage points in diverse environments. However, these applications often still involve either manual operation or extensive operator interaction, and the teams deploying these systems can consist of multiple operators per robot. Bridging this gap will require both more effective coordination between robots and a better understanding of application domains.
My work focuses on enabling aerial robots to make intelligent decisions about how to sense, sample, and observe their environments, both individually and in groups. I will start this talk by discussing active perception with individual robots in the context of searching for survivors in a subterranean environment; I will discuss how robots can quickly navigate and map such environments with careful attention to dynamics, camera views, and the interactions between the two. Given individual robots endowed with the ability to intelligently observe and inspect, how can we develop teams that coordinate effectively and efficiently? Toward this end, I will turn to the problem of autonomously filming a group of people, such as a team sport or a dramatic performance. By applying the rich theory of submodular and combinatorial optimization, simple algorithms can enable individual robots to film autonomously and can augment them with the ability to coordinate in teams. I will then present a distributed submodular optimization algorithm I developed (Randomized Sequential Partitions, or RSP) that enables this approach to scale to large numbers of robots, and I will discuss how to apply this approach to multi-robot videography by carefully designing objectives and reasoning in terms of pixel densities.
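For context, here is a minimal sketch of the sequential-greedy baseline that methods like RSP build on (this is not RSP itself, and the coverage objective is an invented toy): each robot in turn picks the view with the largest marginal gain in a submodular objective, a scheme that carries the classic 1/2 approximation guarantee for monotone submodular maximization.

def coverage(views):
    # Submodular objective: number of distinct targets seen by the team.
    return len(set().union(*views)) if views else 0

def sequential_greedy(robot_options):
    chosen = []
    for options in robot_options:  # robots decide one after another
        best = max(options, key=lambda v: coverage(chosen + [v]))
        chosen.append(best)
    return chosen

# Example: each robot can point its camera at one of two target sets.
robots = [
    [{1, 2}, {3}],
    [{1, 2}, {4, 5}],
    [{5}, {6}],
]
chosen = sequential_greedy(robots)
print(chosen, coverage(chosen))

The catch is that strictly sequential decisions do not scale with team size, which is exactly the bottleneck that randomized partitioning schemes such as RSP address.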
About the Speaker: Micah Corah is an Assistant Professor in Computer Science at the Colorado School of Mines, where his research focuses on aerial robots, active perception, and multi-robot teams. Before that, Micah was a postdoc in the AirLab at Carnegie Mellon University, where he worked to develop teams of flying cameras for filming and reconstructing groups of moving people. Micah also competed with team CoSTAR in the DARPA Subterranean Challenge while a postdoc at JPL, where he focused on aerial autonomy and multi-robot exploration in caves and mines. He completed his Ph.D. in Robotics at Carnegie Mellon University in fall 2020. His thesis work involved active perception, exploration, and target tracking for aerial robots with a focus on distributed perception planning; during this time Micah developed the first submodular optimization algorithms for multi-robot perception planning that scale to large numbers of robots while maintaining optimization guarantees.
Amanda Prorok
University of Cambridge
Using Graph Neural Networks to Learn to Communicate, Cooperate, and Coordinate in Multi-Robot Systems
Date: January 30th, 2024, 2:00 pm CET
Room: HS 1 / YouTube
Abstract: How are we to orchestrate large teams of agents? How do we distill global goals into local robot policies? Machine learning has revolutionized the way in which we address these questions by enabling us to automatically synthesize decentralized agent policies from global objectives. In this presentation, I first describe how we leverage data-driven approaches to learn interaction strategies that lead to coordinated and cooperative behaviors. I will introduce our work on Graph Neural Networks, and show how we use such architectures to learn multi-agent policies through differentiable communication channels. I will present some of our results on cooperative perception, coordinated path planning, and close-proximity quadrotor flight. To conclude, I discuss the impact of policy heterogeneity on agent alignment and sim-to-real transfer.
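A minimal message-passing sketch of the idea described above (not the speaker's actual architecture): each agent aggregates its neighbors' messages over the communication graph and maps them, together with its own features, to a policy embedding. In practice this is written in a differentiable framework so the communication channel can be trained end to end; plain numpy is used here only for readability, and all dimensions are illustrative.

import numpy as np

def gnn_layer(features, adjacency, W_self, W_msg):
    # messages = mean of neighbor features, transformed by shared weights
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = (adjacency @ features) / deg
    return np.tanh(features @ W_self + messages @ W_msg)

rng = np.random.default_rng(3)
n_agents, d = 4, 8
x = rng.normal(size=(n_agents, d))  # embedded local observations
A = np.array([[0, 1, 0, 0],         # who can talk to whom
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
h = gnn_layer(x, A, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(h.shape)  # one policy embedding per agent

Because the weights are shared across agents and the aggregation respects the communication graph, the same trained policy runs on teams of any size and topology.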
About the Speaker: Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology, at Cambridge University, and a Fellow of Pembroke College. Her lab's research focuses on multi-agent and multi-robot systems. Their mission is to find new ways of coordinating artificially intelligent agents (e.g., robots, vehicles, machines) to achieve common goals in shared physical and virtual spaces. Together with her lab, Prorok pioneered methods for differentiable communication between learning agents. Their research brings in methods from machine learning, planning, and control, and has numerous applications, including automated transport and logistics, environmental monitoring, surveillance, and search.
Prior to joining Cambridge, Amanda was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA. She completed her PhD at EPFL, Switzerland. She has been honored with numerous research awards, including an ERC Starting Grant, an Amazon Research Award, the EPSRC New Investigator Award, the Isaac Newton Trust Early Career Award, and several Best Paper awards. Her PhD thesis was awarded the Asea Brown Boveri (ABB) prize for the best thesis at EPFL in Computer Science. She serves as Associate Editor for IEEE Robotics and Automation Letters (RA-L) and Associate Editor for Autonomous Robots (AURO).
Michael Burch
University of Applied Sciences Graubünden
Eye Tracking in Visual Analytics
Date: January 10th, 2024, 2:00 pm CET
Room: S3 055 / YouTube
Abstract: Visual analytics tools are complex visual interfaces that can be inspected from many perspectives: the visualizations, user interface components, interaction techniques, displays, and algorithmic techniques, but above all the users, expert or non-expert, with their experience levels and tasks at hand. No matter how complex such a visual analytics tool is and which application field it focuses on, user evaluation is a powerful concept for investigating whether the tool is understandable and useful or whether design flaws create challenges on the users' side. Eye tracking is becoming increasingly prominent in visual analytics as a way to understand user behavior based on visual attention and visual scanning strategies. However, the recorded eye movement data creates a new complex data source, for which visual analytics is required in turn to find patterns, anomalies, insights, and knowledge in the eye movement data.
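As one concrete example of turning raw gaze samples into analyzable structure, here is a compact sketch of dispersion-based fixation detection (I-DT), a standard first step in eye movement analysis; the thresholds and data are illustrative.

def idt_fixations(gaze, max_dispersion=30.0, min_samples=6):
    # gaze: list of (x, y) samples at a fixed sampling rate; a window of
    # samples counts as a fixation if its spatial spread stays small.
    def dispersion(w):
        xs, ys = zip(*w)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))
    fixations, start = [], 0
    while start <= len(gaze) - min_samples:
        end = start + min_samples
        if dispersion(gaze[start:end]) <= max_dispersion:
            # grow the window while the gaze points stay tightly clustered
            while end < len(gaze) and dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            xs, ys = zip(*gaze[start:end])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), end - start))
            start = end
        else:
            start += 1
    return fixations

# Two synthetic fixations: around (101, 100) and around (400, 300).
samples = [(100, 100), (102, 99), (101, 101), (103, 100), (100, 102),
           (101, 100), (400, 300), (402, 301), (401, 299), (400, 300),
           (399, 301), (401, 300)]
print(idt_fixations(samples))

The resulting fixation sequences (scanpaths) are exactly the kind of complex derived data that, as the abstract notes, call for visual analytics in turn.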
About the Speaker: Michael Burch studied computer science and mathematics at Saarland University in Saarbrücken, Germany. He received his PhD from the University of Trier in 2010 in the fields of information visualization and visual analytics. After eight years as a postdoc in the Visualization Research Center (VISUS) in Stuttgart, he moved to the Eindhoven University of Technology (TU/e) as an assistant professor for visual analytics. Since October 2020 he has been working as a lecturer in visualization at the University of Applied Sciences in Chur, Switzerland. Michael Burch serves on many international program committees and has published more than 190 conference papers and journal articles in the field of visualization. His main interests are information visualization, visual analytics, eye tracking, and data science.