Please use this identifier to cite or link to this item: doi:10.22028/D291-40450
Title: Building bridges for better machines : from machine ethics to machine explainability and back
Author(s): Speith, Timo
Language: English
Year of Publication: 2023
SWD key words: applied ethics
artificial intelligence
explanation
ethics
Free key words: explainable artificial intelligence
machine ethics
machine explainability
interpretability
DDC notations: 004 Computer science, internet
100 Philosophy
Publication type: Dissertation
Abstract: Be it nursing robots in Japan, self-driving buses in Germany, or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and machine explainability prove particularly effective only in symbiosis. In this context, this thesis demonstrates how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. Regarding machine explainability, we outline how our proposed framework, by using an argumentation-based approach to decision making, can provide a foundation for machine explanations. Beyond the framework, we also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we outline what, exactly, machine explainability research aims to accomplish. Finally, we use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework against these criteria shows that it is a promising approach and promises to outperform many other explainability approaches developed so far.
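[Editor's illustration: the abstract mentions an argumentation-based approach to decision making as the foundation for machine explanations. The following is a minimal, purely illustrative Python sketch of Dung-style abstract argumentation (computing the grounded extension); it is not the thesis's actual framework, and all argument names and the attack relation are hypothetical.]

def grounded_extension(arguments, attacks):
    # Grounded extension: least fixed point of the characteristic
    # function F(S) = {a | every attacker of a is attacked by S},
    # reached by iterating F from the empty set (F is monotone).
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    s = set()
    while True:
        nxt = {
            a for a in arguments
            if all(any((d, b) in attacks for d in s) for b in attackers[a])
        }
        if nxt == s:
            return s
        s = nxt

# Hypothetical example: "swerve" attacks "brake", and
# "pedestrian_on_left" attacks "swerve", so "brake" is defended.
arguments = {"brake", "swerve", "pedestrian_on_left"}
attacks = {("swerve", "brake"), ("pedestrian_on_left", "swerve")}
print(grounded_extension(arguments, attacks))
# -> {'pedestrian_on_left', 'brake'} (set order may vary)

[In such a setting, an explanation for accepting "brake" can cite the argument that defends it against its attacker; this is one way an argumentation-based approach can ground machine explanations.]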
Link to this record: urn:nbn:de:bsz:291--ds-404509
hdl:20.500.11880/36396
http://dx.doi.org/10.22028/D291-40450
Advisor: Nortmann, Ulrich
Date of oral examination: 1-Jun-2023
Date of registration: 7-Sep-2023
Third-party funds sponsorship: DFG: CRC 248: Center for Perspicuous Computing; VolkswagenStiftung: Explainable Intelligent Systems
Sponsorship ID: DFG: 389792660; VolkswagenStiftung: AZ 95143, AZ 9B830, AZ 98509, AZ 98514
Faculty: P - Philosophische Fakultät
Department: P - Philosophie
Professorship: P - Prof. Dr. Ulrich Nortmann
Collections: SciDok - Der Wissenschaftsserver der Universität des Saarlandes

Files for this record:
File: Dissertation_UdS_Speith.pdf (9.38 MB, Adobe PDF)


This item is licensed under a Creative Commons License.