Please use this reference to cite this resource: doi:10.22028/D291-38739
Title: XEngine: Optimal Tensor Rematerialization for Neural Networks in Heterogeneous Environments
Author(s): Schuler, Manuela
Membarth, Richard
Slusallek, Philipp
Language: English
Journal title: ACM Transactions on Architecture and Code Optimization
Volume: 20 (2023)
Issue: 1
Publisher/Platform: Association for Computing Machinery
Year of publication: 2022
Free keywords: Rematerialization
integer linear programming
neural networks
memory management
heterogeneous computing
DDC subject group: 004 Computer science
Document type: Journal article
Abstract: Memory efficiency is crucial when training deep learning networks on resource-restricted devices. During backpropagation, forward tensors are used to calculate gradients. Instead of keeping all of those dependencies in memory until they are reused in backpropagation, some forward tensors can be discarded and later recomputed from saved tensors, so-called checkpoints. This allows, in particular, resource-constrained heterogeneous environments to make use of all available compute devices. Unfortunately, choosing these checkpoints is a non-trivial problem and poses a challenge to the programmer: improper or excessive recomputation negates the benefit of checkpointing. In this article, we present XEngine, an approach that schedules network operators to heterogeneous devices in low-memory environments by determining checkpoints and recomputations of tensors. Our approach selects suitable resources per timestep and operator and optimizes the end-to-end time of neural networks while taking the memory limitation of each device into account. To this end, we formulate a mixed-integer quadratic program (MIQP) that schedules the operators of deep learning networks on heterogeneous systems. We compare our MIQP solver XEngine against Checkmate [12], a mixed-integer linear programming (MILP) approach that solves recomputation on a single device. Our solver finds schedules that are up to 22.5% faster than the fastest Checkmate schedule, in which the network is computed exclusively on a single device. We also find valid schedules for networks that use both central processing units and graphics processing units when memory limitations do not allow scheduling exclusively on the graphics processing unit.
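A minimal sketch of the checkpointing idea described in the abstract, using PyTorch's built-in torch.utils.checkpoint rather than the paper's XEngine solver; the toy network, its layer sizes, and the choice of checkpointed block are illustrative assumptions only:

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical toy network; layer sizes are illustrative only.
block1 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(),
                       nn.Linear(1024, 1024), nn.ReLU())
block2 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

x = torch.randn(32, 1024, requires_grad=True)

# Only the input of block1 (the "checkpoint") is kept; its intermediate
# forward activations are discarded and recomputed during backpropagation.
# use_reentrant=False requires a recent PyTorch release.
h = checkpoint(block1, x, use_reentrant=False)
y = block2(h)          # block2's activations are stored as usual
y.sum().backward()     # triggers recomputation of block1's forward pass

The trade-off shown here is the one the article optimizes: lower peak memory in exchange for extra forward recomputation, which XEngine additionally schedules across heterogeneous devices.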
DOI of the first publication: 10.1145/3568956
URL of the first publication: https://doi.org/10.1145/3568956
Link to this record: urn:nbn:de:bsz:291--ds-387394
hdl:20.500.11880/34902
http://dx.doi.org/10.22028/D291-38739
ISSN: 1544-3973
1544-3566
Date of record entry: 18-Jan-2023
Faculty: MI - Faculty of Mathematics and Computer Science
Department: MI - Computer Science
Professorship: MI - Prof. Dr. Philipp Slusallek
Collection: SciDok - The science server of Saarland University

Files in this record:
File          Description   Size      Format
3568956.pdf                 2.15 MB   Adobe PDF


This resource was published under the following copyright terms: Creative Commons license