Paper accepted at TMLR!



Our paper "Interpretable Additive Tabular Transformer Networks" has been accepted at TMLR!

Published on May 14, 2024 by Data Science @ LMU Munich


  • Authors: Anton Frederik Thielmann, Arik Reuter, Thomas Kneib, David Rügamer, Benjamin Säfken
  • Title: Interpretable Additive Tabular Transformer Networks
  • Abstract: Attention-based Transformer networks have not only revolutionized Natural Language Processing but have also achieved state-of-the-art results for tabular data modeling. The attention mechanism, in particular, has proven to be highly effective in accurately modeling categorical variables. Although deep learning models have recently outperformed tree-based models, their opaque nature often prevents a full understanding of the individual impact of each feature. In contrast, additive neural network structures have proven to be both predictive and interpretable. Within the context of explainable deep learning, we propose Neural Additive Tabular Transformer Networks (NATT), a modeling framework that combines the intelligibility of additive neural networks with the predictive power of Transformer models. NATT offers inherent intelligibility while achieving similar performance to complex deep learning models. To validate its efficacy, we conduct experiments on multiple datasets and find that NATT performs on par with state-of-the-art methods on tabular data and surpasses other interpretable approaches.
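
For readers curious how the core idea in the abstract (per-feature subnetworks whose scalar outputs are summed, with a Transformer encoder contextualizing the categorical variables) can look in code, here is a minimal sketch in PyTorch. It is an illustrative reconstruction, not the authors' NATT code; all module names, dimensions, and the handling of categorical tokens are assumptions of ours.

```python
# Illustrative sketch of the general "additive network + Transformer" idea
# from the abstract. NOT the authors' NATT implementation: module names,
# dimensions, and the categorical-token handling are assumptions.
import torch
import torch.nn as nn

class AdditiveTabularSketch(nn.Module):
    def __init__(self, num_numeric, cat_cardinalities, d_model=16, hidden=32):
        super().__init__()
        # One small MLP per numeric feature; its scalar output is that
        # feature's additive contribution, so it can be plotted directly
        # as a shape function.
        self.numeric_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_numeric)
        )
        # Categorical features become embedding tokens that a Transformer
        # encoder contextualizes before each is mapped to one contribution.
        self.embeddings = nn.ModuleList(
            nn.Embedding(card, d_model) for card in cat_cardinalities
        )
        self.encoder = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.cat_heads = nn.ModuleList(
            nn.Linear(d_model, 1) for _ in cat_cardinalities
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x_num, x_cat):
        # x_num: (batch, num_numeric) floats; x_cat: (batch, num_cat) ints.
        contribs = [net(x_num[:, [j]]) for j, net in enumerate(self.numeric_nets)]
        tokens = torch.stack(
            [emb(x_cat[:, j]) for j, emb in enumerate(self.embeddings)], dim=1
        )
        encoded = self.encoder(tokens)  # (batch, num_cat, d_model)
        contribs += [head(encoded[:, j]) for j, head in enumerate(self.cat_heads)]
        # Prediction = global bias + sum of per-feature terms, so every
        # term can be inspected in isolation.
        return self.bias + torch.cat(contribs, dim=1).sum(dim=1, keepdim=True)

# Hypothetical usage: 3 numeric features, 2 categorical features.
model = AdditiveTabularSketch(num_numeric=3, cat_cardinalities=[10, 5])
y = model(torch.randn(8, 3), torch.randint(0, 5, (8, 2)))  # (8, 1) predictions
```

Because the prediction is a plain sum of per-feature terms, each numeric subnetwork traces an inspectable shape function, which is the intelligibility property the abstract emphasizes; in this sketch, the attention block trades some strict additivity among the categorical features for the ability to model their interactions.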