Chair of Artificial Intelligence and Machine Learning

Learning from Imprecise Data with an Adjusted Infimum Loss (Ba/Ma)

Topic for a bachelor's or master's thesis

Short Description:

The problem of learning predictive models from imprecise data modelled in the form of sets has recently attracted increasing attention in machine learning, and various methods have been proposed to tackle it. One such method is based on minimizing a generalized loss function (called the optimistic superset loss or infimum loss), which compares a (precise) prediction with a set of potential target values in an "optimistic" way. While this method is well-motivated and exhibits interesting theoretical properties [2], overly extreme optimism may sometimes introduce a bias into the learning process. Therefore, an adjusted version of this loss has recently been proposed in [1]. The goal of the thesis is to implement this proposal for specific types of machine learning problems and to evaluate it on real data.
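To illustrate the basic idea (not the adjusted variant from [1]), the infimum loss evaluates a precise prediction against a set of candidate targets by taking the smallest loss over that set. A minimal sketch for regression with interval-valued targets and squared error, where the function name and interface are our own choices for illustration:

```python
def infimum_squared_loss(pred: float, lo: float, hi: float) -> float:
    """Optimistic superset (infimum) squared loss for an interval target [lo, hi]:
    the minimum of (pred - y)**2 over all y in [lo, hi].

    The minimum is attained at the point of the interval closest to the
    prediction, so the loss is zero whenever pred lies inside the interval.
    """
    nearest = min(max(pred, lo), hi)  # project the prediction onto [lo, hi]
    return (pred - nearest) ** 2

# A prediction inside the set of candidate targets incurs no loss;
# outside, the loss is the squared distance to the nearest endpoint.
print(infimum_squared_loss(4.0, 3.0, 5.0))  # → 0.0
print(infimum_squared_loss(2.0, 3.0, 5.0))  # → 1.0
```

This "optimism" is what can bias learning: the model is never penalized as long as its prediction falls anywhere inside the target set, however wide that set is, which motivates the adjustment studied in [1].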

Prerequisites

Basic knowledge of machine learning, especially supervised learning (e.g., classification, regression) and experimental evaluation; implementation skills

Contact

Prof. Eyke Hüllermeier

References

  • [1] E. Hüllermeier, I. Couso, S. Destercke. Learning from Imprecise Data: Adjustments of Optimistic and Pessimistic Variants. Proc. SUM 2019, Compiègne, France, LNCS 11940, Springer, 2019. https://uni-paderborn.sciebo.de/s/zkGWlaqq22AkjU5
  • [2] V. Cabannes, A. Rudi, F.R. Bach. Structured Prediction with Partial Labelling through the Infimum Loss. Proc. ICML, International Conference on Machine Learning, 2020.