Please use this reference to cite this resource: doi:10.22028/D291-40450
Title: Building bridges for better machines: from machine ethics to machine explainability and back
Author: Speith, Timo
Language: English
Year of publication: 2023
Controlled keywords: applied ethics
artificial intelligence
explanation
ethics
Free keywords: explainable artificial intelligence
machine ethics
machine explainability
interpretability
DDC subject group: 004 Computer science
100 Philosophy
Document type: Dissertation
Abstract: Be it nursing robots in Japan, self-driving buses in Germany, or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and explainability prove particularly effective only in symbiosis. In this context, this thesis demonstrates how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we outline what, exactly, machine explainability research aims to accomplish. Finally, we use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework using these criteria shows that it is a promising approach and promises to outperform many other explainability approaches that have been developed so far.
Link to this record: urn:nbn:de:bsz:291--ds-404509
hdl:20.500.11880/36396
http://dx.doi.org/10.22028/D291-40450
Primary reviewer: Nortmann, Ulrich
Date of oral examination: 1-Jun-2023
Date of entry: 7-Sep-2023
Third-party funds / funding: DFG: CRC 248: Center for Perspicuous Computing; VolkswagenStiftung: Explainable Intelligent Systems
Grant number: DFG: 389792660; VolkswagenStiftung: AZ 95143, AZ 9B830, AZ 98509, AZ 98514
Faculty: P - Philosophische Fakultät
Department: P - Philosophy
Professorship: P - Prof. Dr. Ulrich Nortmann
Collection: SciDok - Der Wissenschaftsserver der Universität des Saarlandes

Files in this record:
File	Description	Size	Format
Dissertation_UdS_Speith.pdf		9.38 MB	Adobe PDF


This resource was published under the following copyright license: Creative Commons