Topics for the upcoming semester have been updated.
Last update on topic selection procedure: 25.08.2025
Details on the procedure for the upcoming semester will become available in time.
In the meantime, feel free to contact potential supervisors about your topics of interest (see instructions).
Open Topics for Bachelor Theses
If you are looking for a Bachelor Thesis topic, please register for the Bachelor Thesis course, either 051065 LP Softwarepraktikum mit Bachelorarbeit (old curriculum) or Group 1 of 051080 LP Softwarepraktikum mit Bachelorarbeit (new curriculum since 2022W). Then look up the list of dedicated topics offered for the current semester in Section (A) below. All listed topics are available for Bachelor theses unless a corresponding restriction is stated in the topic description. Of course, the topic will be limited in effort and scientific claims to meet the requirements (12 ECTS or 15 ECTS) of a Bachelor Thesis. If you are interested and need to clarify details, do not hesitate to contact us: send an e-mail to Prof. Wolfgang Klas or Prof. Gerald Quirchmayr, or contact a member of the research group.
» Topics for Bachelor Thesis - see the listing in Section (A) below.
Please, make sure you follow the "Instructions: How to get a topic for my SPBA Bachelor Project" given here.
Before contacting us, PLEASE read the » Recommendations & Guidelines for Bachelor Thesis available here.
Open Topics for Master Theses and Practical Courses (PR, P1, P2)
In the following, some of the open topics in the area of Multimedia Information Systems are listed. If you are interested and have an idea for a project, do not hesitate to contact us: send an e-mail to Prof. Wolfgang Klas or contact a member of the research group. In the case of P1 or P2 projects, please make sure you follow the "Instructions: How to get a topic for my P1 or P2 Project" given here.
In general, topics in the area of Multimedia Information Systems technologies include:
- analyze, manage, store, create and compose, semantically enrich & play back multimedia content;
- semantically smart multimedia systems;
- security.
Possible application domains include:
- Detecting conflicting information and checking facts on the Web
- Content Authoring and Management Systems
- Multimedia Web Content Management
- Robotic and IoT Applications
- Blockchain Technologies and Applications
- Interactive Display Systems
- Game-based Learning
- Service Oriented Architecture (SOA) and Cloud Based Services
Section (A) below lists topics that can be chosen in the course of a PR Praktikum, but are in principle also available for a master thesis (usually expanded and more advanced).
Section (B) below lists topics that are intended to be chosen for a master thesis.
(A) Topics for Practical Courses (SPBA, PR P1, PR P2)
CL/GQ01: Information Security Policy Repository
Different types of information security policies make it difficult to access relevant passages in individual policies and combine them into an actionable recommendation. A repository that stores the policies and makes their sections accessible according to their relevance in a given situation, ranging from editing and auditing policies to applying them in incident handling, would therefore be very helpful. To be effective, the developed repository needs to support the information security policy life cycle.
The goal of the project is to develop such a policy repository/store model and implement a prototype. For searching the policy store, AI technologies as well as traditional search mechanisms building on keywords and ontologies should be considered.
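As a rough illustration of the keyword-based search the repository could offer alongside AI techniques, the following Python sketch indexes policy sections by lifecycle stage and ranks them by keyword overlap. The `PolicySection` fields and the scoring are illustrative assumptions, not part of the project specification:

```python
from dataclasses import dataclass, field

@dataclass
class PolicySection:
    policy: str           # parent policy name
    title: str
    text: str
    lifecycle_stage: str  # e.g. "editing", "auditing", "incident handling"
    keywords: set = field(default_factory=set)

def search(sections, query_terms, stage=None):
    """Rank sections by keyword overlap, optionally filtered by lifecycle stage."""
    hits = []
    for s in sections:
        if stage and s.lifecycle_stage != stage:
            continue
        # Score: overlap between the query and the section's keywords + words.
        score = len(set(query_terms) & (s.keywords | set(s.text.lower().split())))
        if score:
            hits.append((score, s))
    return [s for score, s in sorted(hits, key=lambda h: -h[0])]
```

A real prototype would replace the naive word-splitting with an ontology- or embedding-based index, but the stage filter already reflects the life-cycle support the topic asks for.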
The following sources can serve as a starting point for this project:
M. Alam and M. U. Bokhari, "Information Security Policy Architecture," International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2007), Sivakasi, India, 2007, pp. 120-122, doi: 10.1109/ICCIMA.2007.275.
Kenneth J. Knapp, R. Franklin Morris, Thomas E. Marshall, Terry Anthony Byrd, Information security policy: An organizational-level process model, Computers & Security, Volume 28, Issue 7, 2009, Pages 493-508, ISSN 0167-4048, https://doi.org/10.1016/j.cose.2009.07.001.
Nader Sohrabi Safa, Rossouw Von Solms, Steven Furnell, Information security policy compliance model in organizations, Computers & Security, Volume 56, 2016, Pages 70-82, ISSN 0167-4048, https://doi.org/10.1016/j.cose.2015.10.006.
Hanna Paananen, Michael Lapke, Mikko Siponen, State of the art in information security policy development, Computers & Security, Volume 88, 2020, 101608, ISSN 0167-4048, https://doi.org/10.1016/j.cose.2019.101608.
The suggested structure for the paper accompanying the project is:
- Introduction/Topic description/Motivation
- State of the art in literature and practice
- Modelling method and approach used
- Development of the model
- Prototype (documentation, source code, etc.)
- Test
- Discussion of the results
- Outlook and conclusion
- Tags:
- Contact: Gerald Quirchmayr, Christian Luidold
CL/GQ02: Threat Intelligence for Cyber Security Decision Making
Cyber Security Decision Making is becoming a core aspect of cyber defense efforts. Advanced decision models and processes, such as the OODA Loop, depend heavily on the available information. The major task of this project is to develop and implement an approach to support the OBSERVE (information collection) and ORIENT (information analysis) phases of this type of model.
https://www.airuniversity.af.edu/Portals/10/AUPress/Books/B_0151_Boyd_Discourse_Winning_Losing.PDF
The topic can be split into two parts:
CL/GQ02a: Develop a support approach for the OBSERVE phase based on readily available sources, such as CVEs, NVD, and MISP. The import interface should ideally be based on the STIX/TAXII standard.
CL/GQ02b: Develop a support approach for the ORIENT phase exploring the potential of “emerging patterns” and “weak signals” in network defense. The goal is to monitor internal network traffic and map it onto the information collected in the OBSERVE phase.
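As a rough sketch of what a STIX-based import for the OBSERVE phase could look like, the following Python snippet parses a minimal, hand-made STIX 2.1-style bundle and indexes its objects by type. The bundle content and helper name are illustrative assumptions; a real import would consume TAXII feeds and handle the full STIX object model:

```python
import json

# A minimal STIX 2.1-style bundle (content simplified for illustration).
sample_bundle = json.dumps({
    "type": "bundle",
    "id": "bundle--0001",
    "objects": [
        {"type": "indicator", "id": "indicator--0001",
         "pattern": "[ipv4-addr:value = '198.51.100.7']",
         "valid_from": "2025-01-01T00:00:00Z"},
        {"type": "vulnerability", "id": "vulnerability--0001",
         "name": "CVE-2024-0001"},
    ],
})

def collect_observables(bundle_json):
    """OBSERVE-phase helper: index bundle objects by their STIX type."""
    by_type = {}
    for obj in json.loads(bundle_json).get("objects", []):
        by_type.setdefault(obj["type"], []).append(obj)
    return by_type

obs = collect_observables(sample_bundle)
```

The resulting index (indicators, vulnerabilities, etc.) is the kind of structured input the ORIENT phase in part CL/GQ02b would match network observations against.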
The following sources can serve as a starting point for this project:
https://levelblue.com/blogs/security-essentials/incident-response-methodology-the-ooda-loop
https://cve.mitre.org/; https://nvd.nist.gov/; https://www.misp-project.org/
https://www.oasis-open.org/2021/06/23/stix-v2-1-and-taxii-v2-1-oasis-standards-are-published/
The suggested structure for the paper accompanying the project is:
- Introduction/Topic description/Motivation
- State of the art in literature and practice
- Modelling method and approach used
- Development of the model
- Prototype (documentation, source code, etc.)
- Test
- Discussion of the results
- Outlook and conclusion
- Tags:
- Contact: Gerald Quirchmayr, Christian Luidold
CL/GQ03: A containerized communication model for communication between NIST/CSF phases
The NIST Cyber Security Framework (https://www.nist.gov/cyberframework) has become an established standard for cyber security management. With version 2.0 of this framework introducing a GOVERN function (NIST CSWP 29, The NIST Cybersecurity Framework (CSF) 2.0, February 26, 2024, p. 3), the importance of communication between the functions has increased significantly, as GOVERN addresses an understanding of organizational context; the establishment of cybersecurity strategy and cybersecurity supply chain risk management; roles, responsibilities, and authorities; policy; and the oversight of cybersecurity strategy.
The goal of this project is to develop a communications model with the GOVERN function as a central command and control hub. This model should then be followed by a prototype based on container technology. The focus of the model and prototype is the communication between the GOVERN function and the other functions (see figure).
Figure: NIST CSWP 29, The NIST Cybersecurity Framework (CSF) 2.0, February 26, 2024, p. 5
The following sources can serve as a starting point for this project:
NIST CSWP 29, The NIST Cybersecurity Framework (CSF) 2.0, February 26, 2024, https://www.nist.gov/cyberframework
Use containers to Build, Share and Run your applications: https://www.docker.com/resources/what-container/
The suggested structure for the paper accompanying the project is:
- Introduction/Topic description/Motivation
- State of the art in literature and practice
- Modelling method and approach used
- Development of the model
- Prototype (documentation, source code, etc.)
- Test
- Discussion of the results
- Outlook and conclusion
- Tags:
- Contact: Gerald Quirchmayr, Christian Luidold
PK06 - Automated Sports Tracker Utilizing Mobile and Wearable Technology
Problem Statement: Manually logging athletic activities is often tedious, inaccurate, and lacks detailed, real-time insights. The core challenge is to automatically make sense of raw time-series data from sensors (e.g., GPS, accelerometer, gyroscope) to identify what activity a person is doing and when it changes. This involves tackling complex problems in signal processing, pattern recognition, and data interpretation.
The project's goal is to develop and evaluate a computational method for automatically segmenting and classifying athletic activities from sensor data. It offers flexibility for students to tailor the project to their interests, allowing them to focus on one of three main directions: comparing various classification models, developing efficient algorithmic systems using change-point detection, or creating an interactive visual analytics tool for expert data exploration.
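To make the change-point-detection direction concrete, here is a toy Python sketch that flags indices in a 1-D sensor stream where the mean of the leading window differs strongly from the trailing window. Window size and threshold are illustrative assumptions; a real system would work on multi-axis accelerometer data and use a statistically grounded test:

```python
def change_points(signal, window=5, threshold=2.0):
    """Flag indices where the mean shifts between adjacent windows."""
    points = []
    for i in range(window, len(signal) - window):
        left = signal[i - window:i]
        right = signal[i:i + window]
        if abs(sum(right) / window - sum(left) / window) > threshold:
            points.append(i)
    return points

# A walking-then-running magnitude signal produces a cluster of
# detections around the activity transition.
segments = change_points([1.0] * 20 + [5.0] * 20)
```

Detected indices cluster around the true transition (index 20 here); a post-processing step would merge such clusters into a single segment boundary before classification.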
Note: This topic is designed as a comprehensive Bachelor's project (12 ECTS) and is not available for Practical Course: Computer Science projects (P1/P2) or a Master's thesis.
- Technologies: Data Science libraries (e.g., Pandas, Scikit-learn, TensorFlow), Web Technologies (e.g., React, D3.js, Firebase).
- Tags:
- Contact: Peter Kalchgruber
PK07 - Improving Transparency in the Grading Process: Developing a Web Application for Efficient and User-friendly Grade Assessment
If you reflect on your experiences as a student in school, you may recall that the process of grade assessment was not always transparent and sometimes seemed unfair. This project aims to solve this by developing a web application that enables lecturers to efficiently track student performance in real-time. The core of the project is a mobile-first interface for lecturers for quick data entry during class, and a separate, secure dashboard for students to view their current academic standing at any time.
The system's core functionality will include managing students, courses, and customizable grading criteria (e.g., class participation, homework, projects). As an advanced implementation track, the project should explore innovative concepts like an XP-based grading system[1]. Throughout development, robust data security and protection principles (e.g., GDPR compliance) are a primary concern to ensure all user data is handled responsibly.
Note: This topic is designed as a comprehensive Bachelor's project (12 ECTS) and is not available for Practical Course: Computer Science projects (P1/P2) or a Master's thesis.
[1] https://blog.haschek.at/xp-based-grading-system/
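One way the XP-based track could work is to accumulate XP per activity and map the total onto a grade. The thresholds below are purely illustrative assumptions, not taken from the linked blog post or any course:

```python
# Hypothetical XP-to-grade mapping; thresholds are illustrative only.
XP_THRESHOLDS = [(800, 1), (650, 2), (500, 3), (350, 4)]  # grade 5 below 350

def total_xp(activities):
    """activities: list of (name, xp_awarded) tuples."""
    return sum(xp for _, xp in activities)

def grade(xp):
    for minimum, g in XP_THRESHOLDS:
        if xp >= minimum:
            return g
    return 5
```

A monotone threshold table like this keeps the grading rule transparent to students, which is exactly the transparency goal of the project.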
- Technologies: Frontend: React (as a PWA) / Backend: Firebase (Auth, Firestore) or Node.js
- Tags:
- Contact: Peter Kalchgruber
WK01 - Jupyter Notebooks for Dedicated Interactive Content of Courses
Jupyter Notebooks allow for the creation and sharing of documents that contain live code, equations, visualizations, and narrative text. Jupyter Notebooks are a well-established and well-recognized tool in academia and education in general as well as in specific fields of research where it is important to provide for reproducibility of scientific results.
Goal of the project is to develop dedicated Jupyter Notebooks for specific course content relevant in the context of our courses (MOD, MCM, MST, MRE, MRS). The approach can be based on the existing framework that we already use for Jupyter Notebooks in some of our courses, but may also further improve or suggest new solutions for the framework as such. The selection of the programming language to be used needs to meet the requirements of the course content, most probably Python, but is in fact very flexible, as Jupyter Notebooks work with a variety of languages.
Mandatory requirement: Students must have understood the course content/material very well and should have passed the course already.
- Technology: Jupyter Notebook, Python, Jupyter Notebook Hub of the CS faculty, Markdown, VS Code (or similar IDE)
- Tags:
- Contact: Wolfgang Klas
WK02 - FactCheck - Precision Metrics
FactCheck is a framework for the detection and resolution of conflicting structured data on the Web. The FactCheck framework is the result of ongoing research at our research group. One of the central building blocks is the context-dependent comparison of structured data of various representations of one and the same real-world object or artefact. The comparison is guided by so-called precision metrics, a flexible and sophisticated technique for logically comparing structured data values. Precision metrics consist of logical predicates used to evaluate the comparison of structured data. Goal of the project is to design and implement an appropriate model for the representation of precision metrics, the construction of such precision metrics, as well as the application of the metrics for evaluating the comparison of data values. Various precision metrics should be defined and compared using a test dataset of 900,000 entities. Results of the project are to be demonstrated by a running demo application.
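To illustrate the idea of a precision metric as a composition of logical predicates, here is a minimal Python sketch. The predicate names, the tolerance value, and the metric definitions are illustrative assumptions, not the FactCheck model itself:

```python
def within_tolerance(a, b, tol=0.05):
    """Numeric values agree within a relative tolerance (assumed 5%)."""
    return abs(a - b) <= tol * max(abs(a), abs(b), 1)

def case_insensitive_equal(a, b):
    """String values agree ignoring case and surrounding whitespace."""
    return a.strip().lower() == b.strip().lower()

def evaluate(metric, a, b):
    """A precision metric here is a list of predicates that must all hold."""
    return all(pred(a, b) for pred in metric)

# Two illustrative metrics for different property types.
population_metric = [within_tolerance]
name_metric = [case_insensitive_equal]
```

Composing metrics from small predicates makes the context dependence explicit: the same two values can agree under a lenient metric and conflict under a strict one.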
- Technology: Web Services, Semantic Web technologies, LOD, Microformat, JSON-LD, AI-Tools, Docker
- Provided to the students: existing implementation of framework, test dataset
- Tags:
- Contact: Wolfgang Klas and Daniel Berger
WK03 - Demo of Blockchain Application Using Ethereum
The goal of this project is the implementation of a demo application which illustrates the concept of a consensus technique, e.g., proof-of-stake or Clique (proof-of-authority) (but not the often-used proof-of-work as employed, e.g., in the Bitcoin Blockchain). For example, a possible application could be the implementation of the four-eyes principle (Vier-Augen-Prinzip) for officially approving documents by making use of two signers acting as "proof-of-authorities". Many other application scenarios are feasible, e.g., the decision-taking principles of a management board of an association or a company, a board of managers, or a board of trustees or directors. The application scenario should be well-chosen in order to illustrate the general principle of proof-of-authority. It may be based on a generic, configurable implementation to show different variations of the proof-of-authority concept, e.g., 1 signer, 2 signers, N signers. The demo application has to be realized such that a short demonstration movie can be recorded, which will be published on the Lab's website.
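The configurable N-signer approval logic at the heart of the four-eyes scenario can be sketched off-chain in a few lines of Python; in the actual project this logic would live in an Ethereum smart contract. Class and method names are illustrative assumptions:

```python
class ApprovalFlow:
    """A document is approved once `required` distinct authorities sign."""

    def __init__(self, authorities, required=2):
        self.authorities = set(authorities)
        self.required = required
        self.signatures = set()

    def sign(self, signer):
        # Only registered authorities may sign (the "proof-of-authority" idea).
        if signer not in self.authorities:
            raise PermissionError(f"{signer} is not an authority")
        self.signatures.add(signer)

    def approved(self):
        return len(self.signatures) >= self.required
```

Setting `required=1`, `2`, or `N` yields the variations the topic mentions; a set of signatures (rather than a counter) guarantees the signers are distinct.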
- Technology: Ethereum, Web technologies, Docker
- Tags:
- Contact: Wolfgang Klas
WK06 - "Studienleistungs & Prüfungspass" Based on Ethereum Blockchain Technology
The goal of this project is - starting out from a given demo implementation - to implement an application for a digital "Studienleistungs & Prüfungspass" (study performance & examination pass) based on blockchain technology. The pass will record the individual, required study achievements (like milestones, tests, etc.) during a course, the final grading of a course, and the collection of gradings of courses during the entire study (like a "Sammelzeugnis" currently used by the university). There are various stakeholders in this scenario: the students, the lecturers of courses, and administration (like SPL). The implementation has to be realized based on Ethereum Blockchain technology, which provides the concept of Smart Contracts. Ethereum Smart Contract technology is one of the most promising implementations for smart behavior of blockchain systems. The focus will be on the proper design and implementation of smart contracts to capture most of the functionality of the application.
- Technology: Ethereum Blockchain Infrastructure, on Linux or Windows, or on Cloud Infrastructure, Web-Technologies for implementing Web-based application, Docker.
- Provided to the students: Optionally, virtual machine
- Tags:
- Contact: Wolfgang Klas
WK07 - Securing Images and Videos by Applying Blockchain Technology
The goal of this project is the design and the implementation of a framework based on blockchain technology that allows for the detection of manipulations in images and videos. Images or videos can be manipulated, e.g., persons (or other objects) can be added to or removed from an image, video frames (or sequences of video frames) can be added or removed from a video. Such a manipulation should be detected based on the storage of specific image encoding parts in a blockchain which allows to re-check the validity of an image encoding. E.g., essential macroblocks or portions of some macroblocks of a JPEG-encoded image could be stored in a blockchain such that it can be checked whether an image still consists of those macroblocks or includes manipulated macroblocks. The project will first have to select and specify the kind of manipulations to be considered in the scope of the project, then design an approach and a framework and implement a prototype and a demo application illustrating the approach, based on a specific blockchain platform that suits best the needs of the application.
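The core verification idea can be sketched independently of any blockchain: hash fixed-size blocks of the image data, store the digests (on-chain in the real project), and later recompute them to locate manipulated regions. This Python sketch hashes raw byte ranges; the block size is an illustrative assumption, and a real implementation would operate on decoded JPEG macroblocks instead:

```python
import hashlib

BLOCK = 64  # bytes per block; illustrative, not a JPEG macroblock size

def block_digests(data):
    """SHA-256 digest per fixed-size block of the image bytes."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(original_digests, data):
    """Indices of blocks whose digest no longer matches the stored one."""
    current = block_digests(data)
    return [i for i, (a, b) in enumerate(zip(original_digests, current))
            if a != b]
```

Because only digests need to be stored, the blockchain holds a compact fingerprint of the image rather than the (large) media object itself.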
- Technology: Blockchain Infrastructure (like Ethereum) on Linux, Windows, or on Cloud Infrastructure, JPEG, MPEG, Web-Technologies for implementing Web-based demo application, Docker.
- Provided to the students: Optionally, virtual machine
- Tags:
- Contact: Wolfgang Klas
WK09 - FactCheck - IdaFix Browser-Extension UI for a Chatbot
The FactCheck framework is designed to address the issue of conflicting data on the Web by providing a systematic approach to detect and resolve such discrepancies. It encompasses the entire fact comparison process, including data acquisition, comparison, presentation of results, and advanced analysis features. As a pioneering research initiative of our research group, FactCheck presents several challenging aspects and opportunities in its development and implementation.
This project aims to find a solution for a user interface which allows an end user visiting a web page to understand the comparison results on conflicting information as well as to provide user feedback on the FactCheck system behaviour. The interface should be realized as an interactive chatbot. The starting point for the project is a prototypically implemented browser extension (IdaFix) which illustrates the functionality as well as the internal system API to be used.
- Technology: Web Browser technologies, e.g., JavaScript, HTML & CSS, Browser APIs (e.g., WebExtensions API), Background Scripts, Content Scripts, Popup Scripts, Messaging APIs, JSON-LD
- Tags:
- Contact: Wolfgang Klas and Marie Aichinger
AH01 - Structured Data Extraction from Unstructured Web Content
In many real-world applications, extracting structured data from unstructured text on web pages is a critical task. While numerous Natural Language Processing (NLP) approaches claim to handle this challenge, implementing a robust and scalable solution remains a key area of exploration. This project focuses on leveraging state-of-the-art information extraction models to build a relation extraction web service capable of transforming unstructured text into structured data.
- Develop a Relation Extraction Web Service:
  - Implement a web service that processes web pages as input and outputs extracted triplets (subject, predicate, object) based on a pre-defined schema.
  - Use cutting-edge NLP and information extraction techniques to ensure accuracy and scalability.
- Create a User-Friendly Web Application:
  - Design a front-end web application that interacts with the web service.
  - Provide an intuitive interface for users to input web pages and view the extracted structured data.
In Praktikum 1 (P1), you will extract structured data from Wikipedia articles and compare the extracted data to Wikidata. This practicum will focus on understanding the relationship between unstructured and structured data and evaluating the accuracy of the extraction process. For Praktikum 2 (P2), you will build an extraction module for FactCheck that:
- Scrapes web page content.
- Extracts information about individuals mentioned on the page.
- Groups the extracted triplets by individual.
- Uses the FactServer's compare endpoint to validate and compare the extracted information with existing data.
Gain hands-on experience with cutting-edge NLP and information extraction models. Learn how to bridge the gap between unstructured and structured data. Work on real-world applications like Wikipedia data analysis and conflict detection systems. Develop skills in web service development, front-end design, and system integration.
- Technology: Web Application, Web Service, Python, JavaScript, Hugging Face Transformers Library, SpaCy Library, PyTorch, Schema.org
- Tags:
- Contact: Adrian Hofer
AH02 - Unified Entity Linking Web Service
FactCheck is a framework for detecting and resolving conflicting data on the Web. It establishes an entire fact comparison process that consists of data acquisition, data comparison, the presentation of comparison results, and comprehensive analysis functions. FactCheck is a leading research topic of our research group and bears challenges in many aspects. To enhance data acquisition, your task is to link the extracted information to existing knowledge bases.
Named Entity Recognition (NER) and Entity Linking (EL) are critical components of natural language processing (NLP) applications, enabling machines to identify and link entities (e.g., people, places, organizations) in text to structured knowledge bases. However, the vast array of available entity linking APIs—such as DBpedia Spotlight, WAT, and Stanford NLP—often yield inconsistent results due to differences in their underlying algorithms and datasets. This inconsistency poses a challenge for developers and researchers seeking reliable and unified entity linking solutions.
This project aims to address this challenge by creating a web service that integrates multiple entity linking tools, allowing users to configure and combine them flexibly. The service will be showcased through an intuitive web application, enabling users to link named entities in web pages or text using customizable configurations.
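One simple way to combine the inconsistent outputs of several linking backends is a majority vote on the linked URI per surface form. The following Python sketch assumes each (hypothetical) backend returns a `{surface_form: linked_uri}` dict; the combination strategy is one illustrative option among the configurable strategies the project would explore:

```python
from collections import Counter

def merge_annotations(per_tool_results):
    """per_tool_results: one {surface_form: linked_uri} dict per backend.
    Returns the majority-voted link per surface form."""
    votes = {}
    for result in per_tool_results:
        for surface, uri in result.items():
            votes.setdefault(surface, Counter())[uri] += 1
    return {surface: counts.most_common(1)[0][0]
            for surface, counts in votes.items()}
```

Weighted votes (trusting, say, DBpedia Spotlight more for encyclopedic entities) would be a natural user-configurable extension.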
- Technology: Web Application, Web Service, Python, JavaScript, Hugging Face Transformers Library, SpaCy Library, PyTorch, Schema.org
- Tags:
- Contact: Adrian Hofer
AH03 - Intelligent Web Scraping
The web is vast and diverse, making the task of scraping web pages both challenging and intricate. Yet, extracting meaningful content from web pages is crucial for applications like conflict detection, where identifying discrepancies between sources requires precise and structured data. This project focuses on developing advanced web scraping techniques that can segment web pages into meaningful sections and extract user-relevant content with configurable granularity and depth.
For Praktikum 1 (P1), build a news aggregator for a selected subset of web pages. This will involve designing a system to collect, organize, and display news content from multiple sources in a user-friendly format. In Praktikum 2 (P2), you will develop a web scraping tool that integrates with the FactCheck server. This tool will extract content from web pages and connect it to a fact-checking system to identify and analyze potential conflicts between sources.
Gain hands-on experience with web scraping techniques and tools. Learn how to handle the complexities of heterogeneous web content. Enhance your skills in data processing, conflict detection, and system integration. This project is ideal for students interested in web technologies and building impactful tools for information analysis.
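A first step toward configurable-granularity segmentation is to split a page along HTML5 sectioning tags and collect the text under each. This standard-library Python sketch (the tag set and class names are illustrative assumptions) shows the idea; a real scraper would also handle nesting depth, `div`-based layouts, and boilerplate removal:

```python
from html.parser import HTMLParser

SECTION_TAGS = {"article", "section", "main", "aside"}

class Segmenter(HTMLParser):
    """Collect (tag, text) pairs for each sectioning element."""

    def __init__(self):
        super().__init__()
        self.sections = []   # list of (tag, text) pairs, innermost first
        self._stack = []

    def handle_starttag(self, tag, attrs):
        if tag in SECTION_TAGS:
            self._stack.append([tag, []])

    def handle_endtag(self, tag):
        if self._stack and tag == self._stack[-1][0]:
            name, chunks = self._stack.pop()
            self.sections.append((name, " ".join(chunks).strip()))

    def handle_data(self, data):
        # Attribute text to the innermost open section.
        if self._stack and data.strip():
            self._stack[-1][1].append(data.strip())

def segment(html):
    s = Segmenter()
    s.feed(html)
    return s.sections
```

The `(tag, text)` pairs are exactly the kind of section-level units a downstream conflict-detection step can compare across sources.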
- Technology: Web Service, Web Application, Scraping, Crawling, Conflict Detection
- Tags:
- Contact: Adrian Hofer
AH04 - Model Context Protocol for Information Extraction
The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. MCP is a framework for managing interactions between AI models and their context, making it particularly useful for tasks like information extraction. We want to investigate the capability of that tool for an application in our research context.
Develop a versatile system that processes unstructured data from various sources, such as documents, real-time streams, or conversational transcripts, to extract and organize key information into structured, actionable formats. The system should leverage context-aware capabilities to summarize content, identify relationships, track evolving details, and provide users with interactive tools for querying and visualizing insights, while maintaining contextual consistency across multiple inputs.
- Technology: Web Service, Web Application, MCP, LLM
- Tags:
- Contact: Adrian Hofer
DB01 - Developing a mapping method to consistently translate between vocabularies (target group P1/P2)
FactCheck is a framework for detecting and resolving conflicting data on the Web. It establishes an entire fact comparison process that consists of data acquisition, data comparison, the presentation of comparison results, and comprehensive analysis functions. FactCheck is a leading research topic of our research group and bears challenges in many aspects.
We define facts as pieces of information that are published by data providers (e.g., as textual content in their websites). If two or more websites publish data on the same topic, we humans can compare the data critically. However, this task is quite difficult for a machine, which does not have an inherent understanding of semantics. Imagine the following example:
You visit website A, which states that Vienna has 1,815,231 inhabitants, while website B states that Vienna has approximately 2,000,000 inhabitants. Depending on the context, both numbers can be seen as true or as not precise enough. This is a problem, as we cannot tell if the numbers are similar enough or if one of them is too far from the truth, making it a conflict. Now imagine a website C, which states that the population of Vienna is 2,000,000. A new problem emerges, as website C offers us the same fact as websites A and B, but uses "population" instead of "inhabitants". As humans, we can tell that we now have two sources that agree, B and C.
However, we cannot make sure that two websites use the same vocabulary (here, "inhabitants" and "population"). A machine is unable to understand the similarity between these concepts like a human would. Furthermore, websites may use structured data but incorrect properties by mistake. In both cases, our ability to compare facts is inhibited. This topic aims to develop a method that relies on structured data from websites to properly validate and translate between schemata.
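The simplest form of such a translation is a lookup table mapping provider-specific property names onto a canonical vocabulary. The Python sketch below uses a hand-made table toward schema.org-style terms (the table entries and function names are illustrative assumptions; the project would derive such mappings systematically, e.g., via NLP similarity):

```python
# Illustrative mapping from published property names to canonical terms.
CANONICAL = {
    "inhabitants": "populationTotal",
    "population": "populationTotal",
    "residents": "populationTotal",
    "founded": "foundingDate",
    "established": "foundingDate",
}

def normalize(fact):
    """fact: (property, value) pair as published by a website."""
    prop, value = fact
    return (CANONICAL.get(prop.lower(), prop), value)

def comparable(fact_a, fact_b):
    """Two facts can be compared once they map to the same property."""
    return normalize(fact_a)[0] == normalize(fact_b)[0]
```

With this in place, the "inhabitants" fact from website A and the "population" fact from website C become directly comparable.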
- Technology: Schema.org, Python, NLTK, Scikit-learn, Azure Cloud Services
- Tags:
- Contact: Daniel Berger
DB02 - String comparison methods for Fact Comparison (target group P1/P2)
FactCheck is a framework for detecting and resolving conflicting data on the Web. It establishes an entire fact comparison process that consists of data acquisition, data comparison, the presentation of comparison results, and comprehensive analysis functions. FactCheck is a leading research topic of our research group and bears challenges in many aspects.
We define facts as pieces of information that are published by data providers (e.g., as textual content in their website(s)). If two or more websites publish data on the same topic, we humans can compare the data critically. However, the comparison of data is not trivial. Imagine the following example:
Website A has information about musicians and states the name "Taylor Swift." Website B, about this particular pop musician, also states a name, however as "Tailor Swift." Website C splits the name property into the first name "T." and the last name "Swift."
As humans, we can compare these strings, identify issues, and acknowledge abbreviations and spelling errors. However, for a machine, these things are a tricky challenge. Furthermore, there are other problems that may be faced in string comparison (name-to-nickname comparison, comparison of longer text, capitalization and spelling errors, homonyms,…).
This topic aims to develop a method that can reliably handle string comparisons based on the different schema types available and proves to be highly accurate in the results.
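Two of the cases from the example above can already be handled with standard-library tools: a normalized edit-distance ratio catches the "Taylor"/"Tailor" spelling error, and a simple initial check catches the "T."/"Taylor" abbreviation. The function names and the 0.85 threshold in this Python sketch are illustrative assumptions:

```python
import difflib

def similarity(a, b):
    """Normalized similarity in [0, 1] based on matching subsequences."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def matches_abbreviation(short, full):
    """'T.' matches 'Taylor': dotted form shares the leading characters."""
    return short.endswith(".") and full.lower().startswith(short[:-1].lower())

def same_name(a, b, threshold=0.85):
    return similarity(a, b) >= threshold or matches_abbreviation(a, b)
```

The project would go beyond this by choosing comparison strategies per schema type and evaluating their accuracy, rather than relying on one fixed threshold.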
- Technology: Python, NLTK, Scikit-learn, Azure Cloud Services, Schema.org
- Tags:
- Contact: Daniel Berger
DB03 - Dynamic Code Execution for Fact Comparison (target group P1/P2)
FactCheck is a framework for detecting and resolving conflicting data on the Web. It establishes an entire fact comparison process that consists of data acquisition, data comparison, the presentation of comparison results, and comprehensive analysis functions. FactCheck is a leading research topic of our research group and bears challenges in many aspects.
We define facts as pieces of information that are published by data providers (e.g., as textual content in their websites). If two or more websites publish data on the same topic, we humans can compare the data critically. However, this task is quite difficult for a machine, which does not have an inherent understanding of text and its semantics.
A comparison between two data points may appear simple. However, we often encounter objects that contain multiple attributes and relationships. To be able to compare these objects, a more complex structure for comparison shall be created. Experts may also use individualized code for their comparison strategies.
The goal of this project is to develop a customizable execution framework that allows experts to write, execute, and manage custom code for fact comparison. This framework should support the integration of execution results and logging information into the system, ensuring transparency and traceability. By enabling dynamic code execution, the project aims to empower experts with the tools needed to handle complex fact comparison scenarios effectively.
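The registration-and-logging core of such a framework can be sketched with a decorator-based registry in Python. The registry design, names, and the single example comparator are illustrative assumptions (and a real system would additionally sandbox the expert code):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("factcheck.compare")

REGISTRY = {}

def comparator(name):
    """Decorator: register an expert-written comparison function by name."""
    def register(fn):
        REGISTRY[name] = fn
        return fn
    return register

def run(name, a, b):
    """Execute a registered comparator, logging results and failures."""
    fn = REGISTRY[name]
    try:
        result = fn(a, b)
        log.info("comparator=%s a=%r b=%r result=%r", name, a, b, result)
        return result
    except Exception:
        log.exception("comparator %s failed", name)
        raise

@comparator("exact")
def exact(a, b):
    return a == b
```

Every execution leaves a log record, which is the traceability requirement; integrating these records back into the FactCheck system is the project's larger task.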
- Technology: Python, Flask, Azure Cloud Services, Schema.org, REST, RabbitMQ
- Tags:
- Contact: Daniel Berger
DB04 - Game-Based Learning (target group SPBA)
Game-based learning leverages the principles of serious gaming to create engaging and effective educational experiences. This project involves designing and developing a game-based learning system tailored to the content and learning goals of a specific lecture or course. The system should focus on how the content is presented, the mechanics used to engage learners, and how the achievement of learning goals is assessed.
Key aspects to explore include defining the learning objectives, designing game mechanics that align with these objectives, and ensuring the content is delivered in an interactive and meaningful way. The project should evaluate whether the learning goals are met and how the game enhances the educational experience.
The goal of this project is to research modern game-based learning techniques and to translate some lecture material into a gamified experience. The application should test skills in selected topics, provide assistance in case of failure of a module, and reward the user for successfully completing tasks.
- Technology: Game Engines (Godot, Unreal, Unity, etc.), Web Technologies, Mobile Technologies
- Tags:
- Contact: Daniel Berger
DB 05 - Multimedia Management and Playback System (target group SPBA)
Organizing and managing large collections of multimedia content, such as images, videos, and audio files, can be a complex task. This project involves developing a multimedia management system that focuses on features like automatic tagging, sorting, and metadata extraction based on media content. The system should also allow users to insert, edit, and manage metadata to enhance organization and searchability.
In addition to management, the system should support the display and playback of media objects, providing users with a seamless way to interact with their collections. Graph-based structures can be explored to represent relationships between media objects, enabling features such as recommending the next media object to play based on these relationships. The system should be intuitive, user-friendly, and adaptable to various use cases.
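As a minimal sketch of the graph-based "play next" idea: media objects are nodes, and weighted edges express relatedness (for example, the number of shared tags). All names and weights below are illustrative assumptions:

```python
# Hypothetical relatedness graph: edge weights could be derived from
# shared tags or extracted metadata.
RELATED = {
    "vacation.mp4": {"beach.jpg": 3, "city_tour.mp4": 1},
    "beach.jpg": {"vacation.mp4": 3, "sunset.jpg": 2},
    "sunset.jpg": {"beach.jpg": 2},
    "city_tour.mp4": {"vacation.mp4": 1},
}

def recommend_next(current: str, already_played: set) -> str:
    """Pick the most strongly related media object not yet played."""
    candidates = {m: w for m, w in RELATED.get(current, {}).items()
                  if m not in already_played}
    return max(candidates, key=candidates.get) if candidates else None
```

A production system would naturally store such a graph in a knowledge-graph backend (see the RDF technologies listed below) rather than in an in-memory dictionary.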
- Technology: UML, Azure Cloud Services, RDF / Knowledge Graphs
- Tags:
- Contact: Daniel Berger
DB 06 - Mobile App: Route finder for large buildings (target group SPBA)
Navigating large or complex buildings can be challenging, especially for individuals with specific accessibility needs. This project aims to develop a mobile application and framework to create digital maps that assist users in navigating unfamiliar buildings. Administrators will be able to map out buildings using 2D/3D tools, while users can select start and end points to receive navigation guidance. The system should consider user preferences, such as avoiding stairs or inaccessible routes, and provide a seamless navigation experience.
The application can be adapted for various use cases, such as shopping centers, hospitals, universities, or airports, and should focus on improving accessibility and ease of use for all visitors.
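Preference-aware routing of this kind can be sketched as a shortest-path search over an indoor map graph where edges carry accessibility flags. The map below is a made-up example, not data from any real building:

```python
import heapq

# Hypothetical indoor map: nodes are rooms/junctions; each edge carries
# a distance and a flag marking whether it involves stairs.
EDGES = {
    "entrance": [("lobby", 10, False)],
    "lobby": [("stairs_1", 5, True), ("elevator", 15, False)],
    "stairs_1": [("floor2", 5, True)],
    "elevator": [("floor2", 2, False)],
    "floor2": [],
}

def find_route(start, goal, avoid_stairs=False):
    """Dijkstra shortest path, optionally skipping edges marked as stairs."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight, is_stairs in EDGES.get(node, []):
            if avoid_stairs and is_stairs:
                continue
            heapq.heappush(queue, (dist + weight, nxt, path + [nxt]))
    return None
```

With `avoid_stairs=True` the search returns the longer elevator route instead of the direct one via the stairs; further preferences (width of corridors, automatic doors, etc.) would simply add more edge attributes.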
- Technology: UML, Mobile Technologies, Web Technologies, Firebase
- Tags:
- Contact: Daniel Berger
DB 07 - UML Model Analyzer (target group SPBA)
Analyzing and syntactically validating UML models from non-editable formats is a highly complex and advanced challenge. This project involves developing a tool that can read UML models (e.g., Use Case, Class Diagram, ER Diagram, Sequence Diagram) from fixed formats such as JPEG, digitally reconstruct the model, and check its validity. The focus is on exploring methods to interpret and analyze these models, reconstruct their structure, and assess their correctness.
Given the advanced nature of the task, the project encourages a thorough exploration of potential approaches, evaluating their feasibility, and pushing the boundaries of what is achievable. While the problem is challenging, the goal is to make meaningful progress and critically assess the strengths and limitations of the chosen methods.
- Technology: UML, Azure Cloud Services, Computer Vision
- Tags:
- Contact: Daniel Berger
MSA02 - FactCheck - A Comparison of Frontend Frameworks for Web Extensions
Note: Previous experience with at least one frontend framework is recommended.
The FactCheck framework aims to address the issue of conflicting data on the Web by providing a systematic approach to detect and resolve such discrepancies. It encompasses the entire fact comparison process, including data acquisition, comparison, presentation of results, and advanced analysis features. As a pioneering research initiative of our research group, FactCheck presents several challenging aspects and opportunities in its development and implementation.
Frontend frameworks, such as Angular, React, or Vue, have become essential for building responsive, modular web applications. Increasingly, they also find use in Web Extensions (add-ons that add new or enrich existing functionality of a Web browser). Your task is to reimplement our browser extension IdaFix, which is currently using an older frontend framework, using modern frontend frameworks. As part of your work, you will...
- redesign the user interface of IdaFix
- research and compare various frontend frameworks
- reimplement (parts of) IdaFix using two or more frontend frameworks of your choice
- compare the implementations, and reflect on their similarities and differences in a written report
- Technologies: AngularJS, HTML, CSS, JavaScript, TypeScript, Manifest V3, Angular, React, Vue
- Tags:
- Contact: Marie Aichinger
MSA03 - FactCheck - Semantic Search for Fact Data
Recommended prerequisite: Multimedia and Semantic Technologies (MST)
The FactCheck framework aims to address the issue of conflicting data on the Web by providing a systematic approach to detect and resolve such discrepancies. It encompasses the entire fact comparison process, including data acquisition, comparison, presentation of results, and advanced analysis features. As a pioneering research initiative of our research group, FactCheck presents several challenging aspects and opportunities in its development and implementation.
Currently, FactCheck collects information from the Web via IdaFix and dedicated crawlers, and provides the resulting insights via a Web API that serves, for the most part, JSON. Allowing complex, semantically rich queries over our collected data may be beneficial in delivering our results in a semantic-web-friendly way. Your task will be to enrich our existing FactCheck prototype(s) with semantic web technologies. As part of your work, you will...
- revisit our current document-based data model, and redesign it into a triple/graph-based model more closely aligned with semantic web standards like OWL or RDF; this may additionally involve...
- finding suitable vocabularies (e.g., RDF-Cube)
- writing a script to automate the conversion from the document-based model to your new triple-based one
- enriching our existing data set with data collected from LOD collections (e.g., DBPedia)
- investigate a suitable storage solution (e.g., a triple store such as Apache Jena or RDF4J) for storing the redesigned data
- enable semantic search by configuring a suitable SPARQL endpoint and interface (e.g., Virtuoso, YASGUI), and optionally also hosting a customized version of DBPedia Lookup
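The conversion step above essentially flattens each document-based record into subject-predicate-object triples. A minimal sketch, with a hypothetical record shape and made-up example namespaces (in the actual project a library such as rdflib would build and serialize the graph, and suitable vocabularies would replace the placeholder URIs):

```python
# Flatten one document-based fact record into (s, p, o) triples.
# The record shape and the example.org namespaces are assumptions.
def document_to_triples(doc: dict) -> list:
    subject = "http://example.org/fact/" + doc["id"]
    triples = []
    for key, value in doc.items():
        if key == "id":
            continue  # the id becomes the subject URI, not a property
        predicate = "http://example.org/vocab/" + key
        triples.append((subject, predicate, str(value)))
    return triples
```

A conversion script would run this over the whole CouchDB-style collection and load the result into the chosen triple store, where it becomes queryable via SPARQL.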
- Technologies: Python, rdflib, SPARQL, Docker, Java
- Tags:
- Contact: Marie Aichinger
MSA04 - FactCheck - Design and Deploy a Scalable Statistical Framework for Fact Data
The FactCheck framework aims to address the issue of conflicting data on the Web by providing a systematic approach to detect and resolve such discrepancies. It encompasses the entire fact comparison process, including data acquisition, comparison, presentation of results, and advanced analysis features. As a pioneering research initiative of our research group, FactCheck presents several challenging aspects and opportunities in its development and implementation.
A core aspect of FactCheck is the generation of statistical insights and metrics from the crawled fact data. Your task is to reimplement our existing statistics API as a scalable stand-alone application using an (ideally Python-based) technology stack of your choice (e.g., PySpark, Pandas, NumPy). The key steps will involve...
- Fact Data Exploration: Explore our existing fact database and familiarize yourself with our data model.
- Fact Data Extraction: Extract thousands of fact records from our existing database to serve as your starting dataset, and adapt the data schema if needed.
- Metrics: Develop new or refine existing metrics from the fact data.
- API Reimplementation: Rebuild the statistics API from the ground up. Optionally, you may also create an interface that showcases its abilities.
- Deploy and Test: Deploy and test your newly developed solution alongside our server using Docker.
If needed, a suitable virtual machine will be provided to you. Depending on your strengths and interests, you may focus on the data science aspects (orchestration, generation of statistics, data wrangling, etc.) or on creating a frontend (e.g., a dashboard, a Jupyter Notebook) to visualize them.
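To give a flavor of the metrics step, here is a small Pandas sketch over invented fact records (the record shape and the "agreement" metric are illustrative assumptions, not the existing data model or API):

```python
import pandas as pd

# Hypothetical fact records: each row is one value for a fact as
# reported by one source.
facts = pd.DataFrame([
    {"fact_id": "pop_vienna", "source": "site_a", "value": 1973000},
    {"fact_id": "pop_vienna", "source": "site_b", "value": 1920000},
    {"fact_id": "pop_vienna", "source": "site_c", "value": 1973000},
])

# One possible metric: per-fact agreement, i.e. the share of sources
# reporting the most common value.
def agreement(group: pd.Series) -> float:
    return group.value_counts().iloc[0] / len(group)

metrics = facts.groupby("fact_id")["value"].agg(["mean", agreement])
```

The same shape of computation ports almost directly to PySpark if the extracted dataset grows beyond what a single machine handles comfortably.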
- Technologies: Python, CouchDB, PySpark, Pandas, NumPy, Jupyter Notebooks, Docker
- Tags:
- Contact: Marie Aichinger
MSA05 - FactCheck - Serious Games for Information Comparison
The FactCheck framework aims to address the issue of conflicting data on the Web by providing a systematic approach to detect and resolve such discrepancies. It encompasses the entire fact comparison process, including data acquisition, comparison, presentation of results, and advanced analysis features. As a pioneering research initiative of our research group, FactCheck presents several challenging aspects and opportunities in its development and implementation.
Serious games, or gamification, refer to applications with a primary purpose beyond entertainment - such as teaching new skills, crowd-sourcing data, or engaging users with a system in new and innovative ways. Your task is to explore the use of serious games and gamification elements for FactCheck. First, you will identify which aspect(s) of FactCheck you would like to gamify, and then design and implement a prototypical serious game using a technology stack of your choice (e.g., as a progressive Web app) which allows the application to run on the Web and communicate with our APIs and databases.
Aspects you may gamify include…
- Entity Resolution and Linking: given our existing fact data and entity resolution results, have users verify existing entity linking results, or perform the linking themselves using an interactive interface
- Fact Data Exploration: given our existing fact data, provide a gamified interface for users to explore and compare facts
- Feedback on Comparison Results: given our comparison API, have users compare data from various websites, and allow them to give feedback tailored towards improving our comparison processes
Alternatively, you may design a game that, using FactCheck concepts and APIs, teaches players about critical thinking, media literacy, or statistical literacy.
- Technologies: JavaScript, TypeScript, Angular, React, WebGL, Docker
- Tags:
- Contact: Marie Aichinger
MSA06 - FactCheck - Observability/Telemetry Framework
The FactCheck framework aims to address the issue of conflicting data on the Web by providing a systematic approach to detect and resolve such discrepancies. It encompasses the entire fact comparison process, including data acquisition, comparison, presentation of results, and advanced analysis features. As a pioneering research initiative of our research group, FactCheck presents several challenging aspects and opportunities in its development and implementation.
Observability (O11y) describes the ability to understand the internal state of a system using only its outputs (e.g., logs, or metrics such as CPU usage or average response time). As the FactCheck framework grows and becomes more distributed, the ability to debug and troubleshoot from persisted logs alone becomes increasingly difficult and time-consuming. Your task is to implement a robust, scalable O11y framework using Grafana tools (e.g., Alloy, Loki) and other technologies (e.g., Prometheus). In your project, you will...
- gain an overview of the FactCheck prototype, and choose one component [P1] / at least two components from which you will collect telemetry data
- learn about key O11y concepts, and familiarize yourself with potential technologies to be used
- leverage existing telemetry data (e.g., logs), and/or implement new telemetry data for your chosen component(s) using zero-code and/or code-based instrumentation
- analyze and visualize the collected data by means of Grafana's dashboard creator
- Technologies: Python, JavaScript, OpenTelemetry, OpenMetrics, Grafana, Prometheus, Docker
- Tags:
- Contact: Marie Aichinger
(B) Topics of Master Theses
Please check the listing below for possible topics for a master thesis. In principle, you may also choose from the topics listed in Section (A) above. Those topics are available for a master thesis as well, but usually in a more expanded or advanced form.
- FactChecking: Models and Languages of Precision Metrics for comparing facts on the Internet.
- FactChecking: Flexible, configurable framework for crawlers for extracting facts from web pages.
- FactChecking: AI-based text analysis tools for extracting facts from the Internet.
- FactChecking: Multimedia content (images, audio, video) analysis tools (including the use of Azure AI tools and services) for extracting facts from the Internet.
- FactChecking: Analysis of Cloud-based storage systems/services and design of a storage framework for a FactChecking prototype.
- FactChecking: Analysis and extraction of structured information from videos using state-of-the-art AI technology
- FactChecking: Analysis and extraction of structured information from images using state-of-the-art AI technology
- FactChecking: Analysis and extraction of structured information from text on the Web (news articles, scientific articles, Wikipedia, movie descriptions, etc.) using state-of-the-art AI technology and methods such as named entity recognition, key phrase recognition, and finding linked entities.
- Blockchain-based collection of semantically-correlated statements available on the Web, given by individual persons over time.
- Blockchain-based distributed media content management (e.g., using Blockchain to track images, video).
- Blockchain technology based on a microservice cloud architecture (e.g., following the approach of Edge/Fog Computing).
- Blockchain technology for providing trust in a FactCheck platform (FactCheck is a framework for the detection and resolution of conflicting structured data on the Web).
- Evaluation of platforms of specific Distributed Ledger Technology / Blockchain Technologies that vary in terms of consensus-model, validation-process, privacy-settings, e.g., technology platforms Cardano, Hashgraph, IOTA, Monero, EOS, NEO ([iteratec]).
- Blockchain-based image manipulation detection by using JPEG-specific image encoding information like macroblocks.
- Blockchain-based video manipulation detection by using MPEG-specific video encoding information like macroblocks and motion encoding.
- Enhancing blockchain technology by fast indexing and search/querying functionality using/integrating elastic-search or graph database technology.
- Enhancing blockchain technology by integrating a data model layer that offers a semantically enriched data model (e.g., XML-based, RDF-based, UML-based) to a blockchain application layer.
- Interactive course content components based on Jupyter Notebooks for a dedicated course (e.g., MRE, MRS, MCM, MST, DMP) offered in the Bachelor's or Master's program.
... additional, new topics will become available in the near future. In the case of Master Thesis topics, you may also contact Prof. Klas, Prof. Quirchmayr, or a researcher of the MIS group to find out more about possible topics.