Chair of Artificial Intelligence and Machine Learning

Research Topics

The research activities of our group focus on machine learning, a scientific discipline at the intersection of computer science, statistics, and applied mathematics. Over the past decades, the importance of machine learning has grown continuously, and the field has developed into one of the main pillars of modern artificial intelligence as well as of the emerging research field of data science.

Uncertainty and Preference in Machine Learning

Much of our research centers on two key cognitive concepts of artificial intelligence: uncertainty and preference.

Machine learning is essentially concerned with extracting models from data and using these models to make predictions. As such, it is inseparably connected with uncertainty. Indeed, learning in the sense of generalizing beyond the data seen so far is necessarily based on a process of induction, i.e., replacing specific observations with general models of the data-generating process. Such models are always hypothetical, and the same holds true for the predictions they produce. In addition to the uncertainty inherent in inductive inference, other sources of uncertainty exist, including incorrect model assumptions and noisy data. Our research addresses questions such as how to appropriately represent uncertainty in machine learning, how to learn from uncertain and imprecise data, and how to produce reliable predictions in safety-critical applications.
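
To give a concrete flavor of uncertainty-aware prediction, the following minimal sketch (purely illustrative, not a description of our own methods) trains a simple ensemble model on synthetic data, uses the entropy of the predicted class distribution as a rough uncertainty measure, and abstains from predicting whenever this entropy exceeds a hypothetical threshold.

```python
# Illustrative sketch: quantifying predictive uncertainty and abstaining
# ("rejecting") on inputs the model is too uncertain about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predicted class probabilities; their Shannon entropy is one simple
# (aggregate) measure of the uncertainty in a prediction.
proba = model.predict_proba(X_test)
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)

# Abstain whenever the entropy exceeds a (hypothetical) threshold, so that
# predictions are only issued where the model is sufficiently certain.
REJECT_THRESHOLD = 0.4
accepted = entropy <= REJECT_THRESHOLD
accuracy = (model.predict(X_test)[accepted] == y_test[accepted]).mean()
print(f"coverage: {accepted.mean():.2f}, accuracy on accepted: {accuracy:.2f}")
```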

The notion of “preference” has a long tradition in economics and operations research, where it has been formalized in various ways and studied extensively from different points of view. Nowadays, it is a topic of key importance in artificial intelligence, where it serves as a basic formalism for knowledge representation and problem-solving. The emerging field of preference learning is concerned with methods for learning preference models from explicit or implicit preference information; such models are typically used to predict the preferences of an individual or a group of individuals in new decision contexts. While research on preference learning has been triggered specifically by applications such as “learning to rank” for information retrieval (e.g., Internet search engines) and recommender systems, the methods developed in this field are useful in many other domains as well.
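
As a simple illustration of the pairwise flavor of preference learning (a sketch on toy data, not one of our own methods), the following snippet learns a linear utility function from pairwise comparisons via the classical pairwise transform and logistic regression; the learned utilities then induce a ranking over items.

```python
# Illustrative sketch of pairwise preference learning ("learning to rank" flavor):
# from comparisons "item i is preferred over item j" we learn a linear utility.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
items = rng.normal(size=(50, 5))      # feature vectors of 50 items (toy data)
true_w = rng.normal(size=5)           # hidden "true" utility used to simulate preferences
utility = items @ true_w

# Observed preferences: random pairs, the item with higher utility is preferred.
pairs = rng.integers(0, 50, size=(300, 2))
X_diff, y = [], []
for i, j in pairs:
    X_diff.append(items[i] - items[j])
    y.append(int(utility[i] > utility[j]))   # 1 if item i is preferred over item j

clf = LogisticRegression(max_iter=1000).fit(np.array(X_diff), np.array(y))

# The learned weight vector induces a ranking over (new) items.
scores = items @ clf.coef_.ravel()
ranking = np.argsort(-scores)
print("top-5 items:", ranking[:5])
```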

Extensions of Supervised Learning

Other research activities deal with extensions or generalizations of the standard setting of supervised learning. For example, while machine learning methods typically assume data to be represented in vectorial form, representations in terms of structured objects, such as graphs, sequences, or order relations, are more natural in many applications. Moreover, representations in terms of sets or distributions are important to capture uncertainty and imprecision. Developing algorithms for learning from such kinds of data is particularly challenging. Our activities in this field include research on machine learning methods for structured output and multi-target prediction, predictive modeling for complex structures (including preference learning as an important special case), as well as weakly and self-supervised learning.
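
The following sketch illustrates multi-target prediction in its simplest form: a toy baseline that fits one model per target and ignores dependencies between the targets (exploiting such dependencies is precisely where the research questions mentioned above begin).

```python
# Illustrative sketch of multi-target prediction: several binary targets are
# predicted at once, using a simple "one model per target" baseline.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=4, random_state=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Baseline that treats the targets independently.
model = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_train, Y_train)

# Subset accuracy: fraction of instances for which *all* targets are correct.
subset_accuracy = (model.predict(X_test) == Y_test).all(axis=1).mean()
print("subset accuracy:", round(subset_accuracy, 3))
```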

Another direction in which the standard setting of supervised learning can be generalized is from batch to online learning or, stated differently, from learning in a static to learning in a dynamic environment. In this regard, we are particularly interested in bandit algorithms, reinforcement learning, and learning on data streams. In contrast to the standard batch setting, in which the entire training data is assumed to be available a priori, these settings require incremental algorithms for learning on continuous and potentially unbounded streams of data. Thus, the training and prediction phases are no longer separated but tightly interleaved. The development of algorithms for online learning is especially challenging due to various constraints the learner needs to obey, such as bounded time and memory resources (adaptation and prediction must be fast, perhaps even in real time, and data cannot be stored in its entirety). Besides, learning algorithms must be able to react to possibly changing environmental conditions, including changes in the underlying data-generating process.
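
As a minimal illustration of the online setting, the following epsilon-greedy bandit sketch (toy reward probabilities, illustrative only) interleaves acting and learning and keeps only a constant amount of memory, namely one running reward estimate per arm.

```python
# Illustrative sketch of an epsilon-greedy multi-armed bandit: learning and
# acting are interleaved, and only one running mean per arm is stored.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])   # hidden reward probabilities (toy setup)
n_arms = len(true_means)
counts = np.zeros(n_arms)
estimates = np.zeros(n_arms)
epsilon = 0.1

for t in range(10_000):
    # Explore with probability epsilon, otherwise exploit the current estimates.
    arm = rng.integers(n_arms) if rng.random() < epsilon else int(np.argmax(estimates))
    reward = float(rng.random() < true_means[arm])
    # Incremental update of the running mean -- past rewards need not be stored.
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("estimated arm means:", np.round(estimates, 3))
```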

Automated Machine Learning (AutoML)

Tackling a predictive modeling task with machine learning requires the design of a suitable “machine learning pipeline”, i.e., the selection and parameterization of machine learning algorithms for specific subtasks and their combination into an overall solution. Doing this manually is difficult and often cumbersome because the space of candidate pipelines is huge. For example, most machine learning algorithms have parameters themselves, called hyperparameters (to distinguish them from the parameters of models learned by the algorithm). These may have a strong influence on an algorithm’s performance, i.e., the quality of models induced by the algorithm, but systematically searching for optimal hyperparameter configurations is a tedious and time-consuming task.
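
The following sketch illustrates systematic hyperparameter search in its simplest form, namely random search with cross-validation (an illustrative toy example; the chosen learner and hyperparameter ranges are arbitrary).

```python
# Illustrative sketch of hyperparameter optimization via random search, one of
# the simplest systematic alternatives to manual tuning.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# Hyperparameters of the learning algorithm (not parameters of the model itself).
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 20),
    "min_samples_leaf": randint(1, 10),
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions, n_iter=20, cv=5, random_state=0)
search.fit(X, y)
print("best configuration:", search.best_params_,
      "cv accuracy:", round(search.best_score_, 3))
```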

In response to this, and in light of the increasing need for practical ML solutions, automated machine learning (AutoML) has recently emerged as a new branch of ML research. AutoML is commonly understood as the task of automating the process of engineering a machine learning pipeline specifically tailored to a problem at hand. Thus, compared to “basic” machine learning algorithms such as neural networks, which solve a concrete learning task, an AutoML tool can be seen as solving a “learning to learn” problem. For the standard problem classes such as classification and regression, several AutoML tools have been proposed in the last couple of years, and their performance has been demonstrated quite impressively in several experimental studies. In particular, this includes methods for algorithm selection (i.e., given a set of candidate algorithms, selecting the one that is best suited for the problem at hand) and algorithm configuration (i.e., selecting optimal hyperparameters of an ML algorithm).
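
As a much-simplified illustration of algorithm selection (not an actual AutoML tool), the following snippet compares a handful of candidate learners by cross-validation and selects the best-performing one; real AutoML systems search far larger spaces of pipelines and configurations.

```python
# Illustrative sketch of algorithm selection: among a few candidate learners,
# pick the one with the best cross-validated performance on the task at hand.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
}
scores = {name: cross_val_score(est, X, y, cv=5).mean()
          for name, est in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```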

Explainable Artificial Intelligence (XAI)

With a growing number of practical applications, notions like transparency, interpretability, and understandability of AI models have received increasing attention in recent years, also due to political regulations stipulating a “right to explanation”. This right pertains to cases where AI models make personalized recommendations, make important decisions, or act on behalf of people. In such cases, it is not only the quality, accuracy, or correctness of recommendations, decisions, or actions that matters. Instead, there is a natural desire to understand why a certain decision or prediction was made by the AI model – just think of a medical diagnosis or the recommendation of a medical treatment as an example. If the prediction was produced by a machine learning model trained in a data-driven way, however, such an understanding is not easy to come by. On the contrary, many machine learning models, such as neural networks, have a “black box” character. The field of explainable artificial intelligence (XAI) is devoted to the development of tools and methods that produce more explainable models or help people understand AI models and the predictions made by such models.
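
As a simple example of a model-agnostic explanation technique, the following sketch computes permutation feature importance on a toy dataset: features whose random shuffling degrades performance the most contribute most strongly to the model’s predictions (illustrative only; many other XAI techniques exist).

```python
# Illustrative sketch of permutation feature importance: how much does
# performance drop when a single feature is randomly shuffled?
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are the most important ones.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```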

Applications

Although the focus of our research is on theoretical foundations and methodological problems, we are also interested in practical applications of machine learning and artificial intelligence. Jointly with colleagues from other disciplines, we have been working on applications in engineering, economics, the life sciences, and the humanities. In addition, we collaborate with partners from industry.

Social and Societal Implications

Artificial intelligence and machine learning have a far-reaching influence on our society. Being aware of the potential impact that algorithms for data analytics and automated decision-making have on people and daily life, we critically analyze the implications of AI research together with colleagues from the social sciences.