Please use this identifier to cite or link to this item: doi:10.22028/D291-38784
Title: Analyzing neural network behavior through deep statistical model checking
Author(s): Gros, Timo P.
Hermanns, Holger
Hoffmann, Jörg
Klauck, Michaela
Steinmetz, Marcel
Language: English
Journal: International Journal on Software Tools for Technology Transfer
Publisher/Platform: Springer Nature
Year of Publication: 2022
Free key words: Statistical model checking
Neural networks
Learning
Verification
Scalability
DDC notations: 004 Computer science, internet
Publication type: Journal Article
Abstract: Neural networks (NN) are taking over ever more decisions thus far taken by humans, even though verifiable system-level guarantees are far out of reach. Neither is the verification technology available, nor is it even understood what a formal, meaningful, extensible, and scalable testbed might look like for such a technology. The present paper is an attempt to improve on both of the above aspects. We present a family of formal models that contain basic features of automated decision-making contexts and which can be extended with further orthogonal features, ultimately encompassing the scope of autonomous driving. Due to the possibility of modeling random noise in the decision actuation, each model instance induces a Markov decision process (MDP) as verification object. The NN in this context has the duty to actuate (near-optimal) decisions. From the verification perspective, the externally learnt NN serves as a determinizer of the MDP, the result being a Markov chain which as such is amenable to statistical model checking. The combination of an MDP and an NN encoding the action policy is central to what we call “deep statistical model checking” (DSMC). While being a straightforward extension of statistical model checking, it enables one to gain deep insight into questions like “how high is the NN-induced safety risk?”, “how good is the NN compared to the optimal policy?” (obtained by model checking the MDP), or “does further training improve the NN?”. We report on an implementation of DSMC inside the Modest Toolset in combination with externally learnt NNs, demonstrating the potential of DSMC on various instances of the model family, and illustrating its scalability as a function of instance size as well as other factors like the degree of NN training.
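The core DSMC idea from the abstract can be illustrated in miniature: a learnt policy resolves every nondeterministic choice in an MDP, and the resulting Markov chain is then sampled by plain Monte Carlo to estimate a reachability probability. The following is a minimal sketch of that pipeline, assuming a hypothetical 5x5 grid MDP with noisy actuation and a hand-coded stand-in for the NN policy; it is not the Modest Toolset implementation described in the paper.

```python
import random

# Toy MDP: a 5x5 grid. GOAL and TRAP are hypothetical cells chosen for
# illustration; the "policy" below stands in for an externally learnt NN.
GOAL, TRAP = (4, 4), (2, 2)
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def step(state, action, rng):
    # MDP dynamics with actuation noise: the intended move succeeds with
    # probability 0.8; otherwise the agent stays put.
    dx, dy = ACTIONS[action]
    if rng.random() < 0.8:
        x, y = state[0] + dx, state[1] + dy
        return (min(max(x, 0), 4), min(max(y, 0), 4))
    return state

def policy(state):
    # Stand-in for the NN determinizer: move east until the goal column,
    # then north. Fixing the action in every state turns the MDP into
    # a Markov chain.
    return "E" if state[0] < GOAL[0] else "N"

def estimate_goal_probability(runs=10_000, horizon=50, seed=0):
    # Statistical model checking of the policy-induced Markov chain:
    # sample finite runs and count those reaching GOAL before TRAP
    # or the horizon.
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        state = (0, 0)
        for _ in range(horizon):
            if state == TRAP:
                break
            if state == GOAL:
                hits += 1
                break
            state = step(state, policy(state), rng)
    return hits / runs
```

Swapping in a differently trained policy and re-running the estimator is how DSMC-style questions ("how high is the NN-induced safety risk?", "does further training improve the NN?") become measurable: each policy induces a different Markov chain, and the sampled probabilities can be compared against the optimum obtained by model checking the full MDP.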
DOI of the first publication: 10.1007/s10009-022-00685-9
URL of the first publication: https://link.springer.com/article/10.1007/s10009-022-00685-9
Link to this record: urn:nbn:de:bsz:291--ds-387842
hdl:20.500.11880/34956
http://dx.doi.org/10.22028/D291-38784
ISSN: 1433-2787
1433-2779
Date of registration: 20-Jan-2023
Faculty: MI - Fakultät für Mathematik und Informatik
Department: MI - Informatik
Professorship: MI - Prof. Dr. Holger Hermanns
MI - Prof. Dr. Jörg Hoffmann
Collections: SciDok - Der Wissenschaftsserver der Universität des Saarlandes

Files for this record:
File: s10009-022-00685-9.pdf (3.25 MB, Adobe PDF)


This item is licensed under a Creative Commons License.