Research

Google Scholar Profile

Explainable Machine Learning for Milling Quality Prediction

This paper presents an explainable machine learning approach for predicting surface roughness in milling. Utilizing a dataset from milling aluminium alloy 2017A, the study employs random forest regression models and feature importance techniques. The key contributions include developing ML models that accurately predict various roughness values and identifying
redundant sensors, particularly those for measuring normal cutting force. Removing these sensors can reduce costs without compromising predictive accuracy, demonstrating the potential of explainable ML to enhance cost-effectiveness in machining.
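
As a rough sketch of the kind of analysis described above (not the paper's exact pipeline), a random forest regressor can be trained on per-cut sensor features and then inspected with permutation importance to highlight sensor channels that contribute little to the roughness prediction. The dataset file, column names, and the roughness target "Ra" below are hypothetical placeholders.

```python
# Hedged sketch: random forest roughness prediction plus permutation importance.
# "milling_2017A.csv", its columns, and the target "Ra" are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("milling_2017A.csv")              # per-cut sensor features + measured roughness
X, y = df.drop(columns=["Ra"]), df["Ra"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out cuts:", model.score(X_test, y_test))

# Rank sensor channels by how much shuffling them degrades the prediction;
# channels with consistently low importance are candidates for removal.
imp = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:30s} {score:.4f}")
```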

Authors: Dennis Gross, Helge Spieker, Arnaud Gottlieb, Ricardo Knoblauch, Mohamed Elmansori

Enhancing Manufacturing Quality Prediction Models through the Integration of Explainability Methods

This research presents a method that uses explainability techniques to improve the performance of machine learning (ML) models for predicting milling process quality, demonstrated on a manufacturing use case. The methodology first trains ML models and then applies a fine-tuning phase in which irrelevant features, identified through explainability methods, are eliminated. This refinement improves predictive performance, paving the way for potential reductions in manufacturing costs and a better understanding of the trained ML models. The study highlights the usefulness of explainability techniques for both explaining and optimizing predictive models in manufacturing.
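
Continuing the hypothetical sketch from the milling entry above, the fine-tuning step can be pictured as dropping the low-importance features and retraining, then comparing predictive scores before and after; the importance threshold below is an assumption for illustration only.

```python
# Hedged continuation of the previous sketch: drop "irrelevant" features and retrain.
# Assumes X_train, X_test, y_train, y_test, model, and imp from the snippet above.
from sklearn.ensemble import RandomForestRegressor

threshold = 0.001  # hypothetical cut-off for irrelevant features
keep = [c for c, s in zip(X_train.columns, imp.importances_mean) if s > threshold]

refined = RandomForestRegressor(n_estimators=300, random_state=0)
refined.fit(X_train[keep], y_train)

print("features kept:", len(keep), "of", X_train.shape[1])
print("R^2 before:", model.score(X_test, y_test))
print("R^2 after :", refined.score(X_test[keep], y_test))
```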

Authors: Dennis Gross, Ricardo Knoblauch, Helge Spieker, Arnaud Gottlieb

Published: ICAART 2024 (International Conference on Agents and Artificial Intelligence 2024)

Probabilistic Model Checking of Stochastic Reinforcement Learning Policies

We introduce a method to verify stochastic reinforcement learning (RL) policies. This approach is compatible with any RL algorithm as long as the algorithm and its corresponding environment collectively adhere to the Markov property. In this setting, the future state of the environment should depend solely on its current state and the action executed, independent of any previous states or actions. Our method integrates a verification technique, referred to as model checking, with RL, leveraging a Markov decision process, a trained RL policy, and a probabilistic computation tree logic (PCTL) formula to build a formal model that can be subsequently verified via the model checker Storm.
We demonstrate our method’s applicability across multiple benchmarks, comparing it to baseline methods called deterministic safety estimates and naive monolithic model checking.
Our results show that our method is suited to verify stochastic RL policies.
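
For context on the tooling, the sketch below shows the bare model-checking step with Storm's Python bindings (stormpy): parse a PRISM model, parse a PCTL property, build the model, and query the probability in the initial state. The paper's actual contribution, building the formal model from an MDP together with a trained stochastic RL policy before checking, is not reproduced here; the model file and property are placeholders.

```python
# Minimal stormpy sketch: check a PCTL reachability property on a PRISM model.
# "model.prism" and the labelled goal state are placeholders; the policy-induction
# step described in the paper is omitted.
import stormpy

program = stormpy.parse_prism_program("model.prism")
formula = 'P=? [ F "goal" ]'                              # hypothetical PCTL query
properties = stormpy.parse_properties_for_prism_program(formula, program)

model = stormpy.build_model(program, properties)
result = stormpy.model_checking(model, properties[0])

initial_state = model.initial_states[0]
print("Probability of eventually reaching the goal:", result.at(initial_state))
```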

Authors: Dennis Gross, Helge Spieker

Published: ICAART 2024 (International Conference on Agents and Artificial Intelligence 2024)

Reproducing Experiments on the Verification of Neural Networks

Artificial neural networks have been widely adopted for various applications, such as autonomous driving and smart factories.
However, reproducibility, a critical factor in many neural network studies, is frequently overlooked.
In this study, we focus on the reproducibility of formal verification studies related to neural networks.
Formal verification is a rigorous method that employs mathematical analysis to verify whether a system meets its specifications.
Specifically, we investigate the reproducibility of three studies in this field by analyzing the reproducibility of their experimental results.

Authors: Dennis Gross

Workshop Presentation: Second Workshop on Reproducibility and Replication of Research Results (RRRR 2023)

Model Checking for Adversarial Multi-Agent Reinforcement Learning with Reactive Defense Methods

Cooperative multi-agent reinforcement learning (CMARL) enables agents to achieve a common objective.
However, the safety (a.k.a. robustness) of the CMARL agents operating in critical environments is not guaranteed.
In particular, agents are susceptible to adversarial noise in their observations that can mislead their decision-making.
So-called denoisers aim to remove adversarial noise from observations, yet they are often error-prone.
A key challenge for any rigorous safety verification technique in CMARL settings is the large number of states and transitions, which generally prohibits the construction of a (monolithic) model of the whole system.
In this paper, we present a verification method for CMARL agents in settings with or without adversarial attacks or denoisers.
Our method relies on a tight integration of CMARL and a verification technique referred to as model checking.
We showcase the applicability of our method on various benchmarks from different domains.
Our experiments show that our method is indeed suited to verify CMARL agents and that it scales better than a naive approach to model checking.
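
The scalability gain comes from building only the part of the state space that is actually reachable under the trained policies, rather than the full monolithic model. The sketch below conveys that idea with a hypothetical environment and policy interface (reset, observations, successors, act); it is not the paper's implementation.

```python
# Conceptual sketch: breadth-first construction of the model induced by fixed
# CMARL policies. The env/policy interface is a hypothetical stand-in.
from collections import deque

def build_induced_model(env, policies):
    """Enumerate only the states reachable under the given joint policy."""
    initial = env.reset()
    transitions = {}                        # state -> list of (probability, next_state)
    frontier, seen = deque([initial]), {initial}
    while frontier:
        state = frontier.popleft()
        # Each agent acts on its (possibly denoised or attacked) observation.
        joint_action = tuple(pi.act(obs) for pi, obs in zip(policies, env.observations(state)))
        transitions[state] = env.successors(state, joint_action)
        for _, next_state in transitions[state]:
            if next_state not in seen:
                seen.add(next_state)
                frontier.append(next_state)
    return transitions   # this induced model is what gets handed to the model checker
```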

Authors: Dennis Gross, Christoph Schmidl, Nils Jansen, and Guillermo A. Pérez

Published: ICAPS 2023 (International Conference on Automated Planning and Scheduling 2023)

Turn-based Multi-Agent Reinforcement Learning Model Checking

In this paper, we propose a novel approach for verifying the compliance of turn-based multi-agent reinforcement learning (TMARL) agents with complex requirements in stochastic multiplayer games.
Our method overcomes the limitations of existing verification approaches, which are inadequate for dealing with TMARL agents and not scalable to large games with multiple agents.
Our approach relies on tight integration of TMARL and a verification technique referred to as model checking.
We demonstrate the effectiveness and scalability of our technique through experiments in different types of environments.
Our experiments show that our method is suited to verify TMARL agents and scales better than naive monolithic model checking.

Author: Dennis Gross

Published: ICAART 2023 (International Conference on Agents and Artificial Intelligence 2023)

Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking

Deep Reinforcement Learning (RL) agents are susceptible to adversarial noise in their observations that can mislead their policies and decrease their performance.
However, an adversary may be interested not only in decreasing the reward, but also in modifying specific temporal logic properties of the policy.
This paper presents a metric that measures the exact impact of adversarial attacks against such properties.
We use this metric to craft optimal adversarial attacks.
Furthermore, we introduce a model checking method that allows us to verify the robustness of RL policies against adversarial attacks.
Our empirical analysis confirms (1) the quality of our metric to craft adversarial attacks against temporal logic properties, and (2) that we are able to concisely assess a system’s robustness against attacks.
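
Conceptually, the impact metric compares the model-checked value of the temporal-logic property with and without an attack, and an optimal attack maximizes that change. The sketch below expresses this with a hypothetical check_property stand-in for the model-checking step; it is not the paper's implementation.

```python
# Hedged sketch of the attack-impact idea: `check_property(policy, attack)` is a
# hypothetical function returning the model-checked property value
# (attack=None means the unattacked policy).
def property_impact(check_property, policy, attack):
    baseline = check_property(policy, attack=None)
    attacked = check_property(policy, attack=attack)
    return abs(attacked - baseline)

def best_attack(check_property, policy, candidate_attacks):
    # Pick the candidate perturbation that shifts the property value the most.
    return max(candidate_attacks, key=lambda a: property_impact(check_property, policy, a))
```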

Authors: Dennis Gross, Nils Jansen, Thiago Simao, and Guillermo A. Pérez

Arxiv: https://arxiv.org/abs/2212.05337

Published: ICAART 2023 (International Conference on Agents and Artificial Intelligence 2023)

COOL-MC: A Comprehensive Tool for Learning and Model Checking

Deep Reinforcement Learning (RL) is a state-of-the-art technique to synthesize sequential decision-making policies.
However, these synthesized policies lack guarantees and limit the applications of deep RL in safety-critical scenarios.
Probabilistic model checking can verify these policies, but no existing tool supports both RL and model checking.
This tool support is needed to foster further integration of RL and model checking.
COOL-MC makes it straightforward to synthesize and verify policies by extending the OpenAI gym with a probabilistic model checking interface.

Authors: Dennis Gross, Nils Jansen, Sebastian Junges, and Guillermo A. Pérez

Workshop Presentation: RL-CONFORM Workshop (IROS 2021)

Arxiv: https://arxiv.org/abs/2209.07133

Published: SETTA 2022 (Symposium on Dependable Software Engineering: Theories, Tools and Applications)

GitHub-Repository: https://github.com/DennisGross/COOL-MC

Robustness Verification for Classifier Ensembles

We give a formal verification procedure that decides whether a classifier ensemble is robust against arbitrary randomized attacks. Such attacks consist of a set of deterministic attacks and a distribution over this set. The robustness-checking problem consists of assessing, given a set of classifiers and a labelled data set, whether there exists a randomized attack that induces a certain expected loss against all classifiers. We show the NP-hardness of the problem and provide an upper bound on the number of attacks that is sufficient to form an optimal randomized attack. These results provide an effective way to reason about the robustness of a classifier ensemble. We provide SMT and MILP encodings to compute optimal randomized attacks or prove that there is no attack inducing a certain expected loss. In the latter case, the classifier ensemble is provably robust. Our prototype implementation verifies multiple neural-network ensembles trained for image-classification tasks. The experimental results using the MILP encoding are promising both in terms of scalability and the general applicability of our verification procedure.
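
To make the randomized-attack question concrete in a simplified setting (this is not the paper's SMT/MILP encodings), suppose the expected loss of each classifier under each deterministic attack from a fixed finite set has already been computed; asking whether some distribution over those attacks pushes the expected loss of every classifier above a threshold is then a linear-programming feasibility check. The loss matrix and threshold below are toy values.

```python
# Toy feasibility check: does a distribution p over a fixed attack set achieve
# expected loss >= threshold against every classifier? (Simplified illustration,
# not the paper's SMT/MILP encodings; loss values are made up.)
import numpy as np
from scipy.optimize import linprog

loss = np.array([[0.2, 0.8, 0.5],      # loss[i, j]: classifier i under attack j
                 [0.7, 0.1, 0.6],
                 [0.4, 0.5, 0.3]])
threshold = 0.4
n_attacks = loss.shape[1]

# Feasibility LP: loss @ p >= threshold for every row, sum(p) = 1, 0 <= p <= 1.
res = linprog(c=np.zeros(n_attacks),
              A_ub=-loss, b_ub=-threshold * np.ones(loss.shape[0]),
              A_eq=np.ones((1, n_attacks)), b_eq=[1.0],
              bounds=[(0, 1)] * n_attacks)

if res.success:
    print("randomized attack found, distribution over attacks:", np.round(res.x, 3))
else:
    print("no randomized attack reaches the threshold; the ensemble is robust here")
```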

Authors: Dennis Gross, Nils Jansen, Guillermo A. Pérez, Stephan Raaijmakers

Arxiv: https://arxiv.org/abs/2005.05587

Published: ATVA 2020 (18th International Symposium on Automated Technology for Verification and Analysis)

Adversarial Patch Camouflage against Aerial Detection

Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage. The traditional way of hiding military assets from sight is camouflage, for example by using camouflage nets. However, large assets like planes or vessels are difficult to conceal by means of traditional camouflage nets. An alternative type of camouflage is the direct misleading of automatic object detectors. Recently, it has been observed that small adversarial changes applied to images of the object can produce erroneous output by deep learning-based detectors. In particular, adversarial attacks have been successfully demonstrated to prohibit person detections in images, requiring a patch with a specific pattern held up in front of the person, thereby essentially camouflaging the person for the detector. Research into this type of patch attacks is still limited and several questions related to the optimal patch configuration remain open.
This work makes two contributions. First, we apply patch-based adversarial attacks for the use case of unmanned aerial surveillance, where the patch is laid on top of large military assets, camouflaging them from automatic detectors running over the imagery. The patch can prevent automatic detection of the whole object while only covering a small part of it. Second, we perform several experiments with different patch configurations, varying their size, position, number and saliency. Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities, and should therefore be considered in the automated analysis of aerial surveillance imagery.
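
To give a flavor of how such a patch is obtained (not the detectors or patch configurations studied in the paper), the toy sketch below optimizes the pixels of a small patch so that a stand-in detector's score for the covered region drops. The detector, image, patch size, and placement are placeholders.

```python
# Toy patch-attack loop: a randomly initialized CNN stands in for a real object
# detector, and the "aerial image" is random noise; only the optimization pattern
# (paste trainable patch, minimize detection score) is the point of the sketch.
import torch
import torch.nn as nn

torch.manual_seed(0)
detector = nn.Sequential(                         # stand-in detection score head
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1)
)
image = torch.rand(1, 3, 128, 128)                # placeholder aerial image
patch = torch.rand(1, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    attacked = image.clone()
    attacked[:, :, 48:80, 48:80] = patch.clamp(0, 1)   # lay the patch on the asset
    score = detector(attacked)                          # higher = "object detected"
    loss = score.mean()                                 # push the detection score down
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final detection score:", float(score))
```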

Authors: Ajaya Adhikari, Richard den Hollander, Ioannis Tolios, Michael van Bekkum, Anneloes Bal, Stijn Hendriks, Maarten Kruithof, Dennis Gross, Nils Jansen, Guillermo Pérez, Kit Buurman, Stephan Raaijmakers

Arxiv: https://arxiv.org/abs/2008.13671

Published: SPIE Security + Defence 2020 (Digital Forum)

An Introduction to Secure Machine Learning

Machine Learning has created a seismic shift in how we think about software engineering and building things in general. Instead of asking how we should solve things ourselves, we dream of being able to just point at the data and let probabilistic systems make an objectively accurate decision for us.

Lately, there has been a growing conversation about the various risks inherent in these sorts of models and how we should mitigate them. The field of secure machine learning attempts to address this problem. The idea, in its most basic form, is to act proactively: put yourself in the mindset of a criminal and ‘hack’ into your own machine learning systems to identify problems, weak spots, or potential back doors before an actual criminal can do the same. After reading this book, you will understand the fundamentals and know how to dive deeper into the field of secure machine learning.