Mutual Information for Explainable Deep Learning of Multiscale Systems

Søren Taverniers, Eric J. Hall, Markos A. Katsoulakis, Daniel M. Tartakovsky

Research output: Working paper/Preprint



Timely completion of design cycles for multiscale and multiphysics systems ranging from consumer electronics to hypersonic vehicles relies on rapid simulation-based prototyping. The latter typically involves high-dimensional spaces of possibly correlated control variables (CVs) and quantities of interest (QoIs) with non-Gaussian and/or multimodal distributions. We develop a model-agnostic, moment-independent global sensitivity analysis (GSA) that relies on differential mutual information to rank the effects of CVs on QoIs. Large amounts of data, which are necessary to rank CVs with confidence, are cheaply generated by a deep neural network (DNN) surrogate model of the underlying process. The DNN predictions are made explainable by the GSA so that the DNN can be deployed to close design loops. Our information-theoretic framework is compatible with a wide variety of black-box models. Its application to multiscale supercapacitor design demonstrates that the CV rankings facilitated by a domain-aware Graph-Informed Neural Network are better resolved than their counterparts obtained with a physics-based model for a fixed computational budget. Consequently, our information-theoretic GSA provides an "outer loop" for accelerated product design by identifying the most and least sensitive input directions and performing subsequent optimization over appropriately reduced parameter subspaces.
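The core idea of the abstract, ranking control variables (CVs) by their mutual information with a quantity of interest (QoI) computed from cheap surrogate samples, can be sketched in a few lines. The snippet below is an illustrative, minimal version only: it uses a simple histogram estimator of mutual information and an analytic toy function standing in for the DNN surrogate, whereas the paper works with differential mutual information and a Graph-Informed Neural Network surrogate. All function names and the toy surrogate are hypothetical.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Crude histogram estimate of the mutual information I(X;Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # joint probability table
    px = pxy.sum(axis=1, keepdims=True)          # marginal of X
    py = pxy.sum(axis=0, keepdims=True)          # marginal of Y
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def rank_control_variables(surrogate, n_cvs, n_samples=20000, seed=0):
    """Rank CVs by mutual information with the QoI, using surrogate samples.

    The surrogate plays the role of the cheap DNN emulator: it lets us
    draw the large sample sizes needed to rank CVs with confidence.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, n_cvs))  # sampled CVs
    q = surrogate(X)                                     # surrogate QoI
    scores = [mutual_information(X[:, j], q) for j in range(n_cvs)]
    order = sorted(range(n_cvs), key=lambda j: -scores[j])
    return order, scores

# Toy "surrogate": QoI depends strongly on CV 0, weakly on CV 1,
# and not at all on CV 2 (so CV 2's score reflects only estimator noise).
toy_surrogate = lambda X: np.sin(3.0 * X[:, 0]) + 0.2 * X[:, 1] ** 2
order, scores = rank_control_variables(toy_surrogate, n_cvs=3)
```

In an actual design loop, the top-ranked CVs would define the reduced parameter subspace over which subsequent optimization is performed, while the bottom-ranked CVs are frozen at nominal values.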
Original language: English
Number of pages: 22
Publication status: Published - 7 Sept 2020

Publication series

Publisher: Cornell University


Keywords

  • cs.LG
  • cs.NA
  • math.NA
  • stat.ML
  • 93B35 (Primary) 68T07, 62R07 (Secondary)


