Simplifying Transformer Blocks: Block Layouts

Written by autoencoder | Published 2024/06/19
Tech Story Tags: deep-learning | transformer-architecture | simplified-transformer-blocks | neural-network-efficiency | deep-transformers | signal-propagation-theory | neural-network-architecture | transformer-efficiency

TL;DR: Reparameterising value and projection parameters in linear layers, via the duality between downweighted residuals and restricted updates, optimizes learning rates and model performance.
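To make the duality mentioned in the TL;DR concrete, the short derivation below sketches how downweighting a residual branch corresponds to restricting updates of a reparameterised weight matrix under plain SGD. The symbols α, β, Ŵ and η are introduced here purely for illustration and are not taken verbatim from the paper; see Appendix A for the authors' full argument.

```latex
% A linear layer parameterised as a downweighted residual around the identity:
W = \alpha I + \beta \widehat{W}, \qquad \text{with only } \widehat{W} \text{ trainable}.
% Chain rule: gradients w.r.t. \widehat{W} are scaled by \beta,
\frac{\partial \mathcal{L}}{\partial \widehat{W}} = \beta \, \frac{\partial \mathcal{L}}{\partial W}.
% One SGD step of size \eta on \widehat{W} therefore moves the effective matrix W by
\Delta W = \beta \, \Delta \widehat{W} = -\eta \beta^{2} \, \frac{\partial \mathcal{L}}{\partial W},
% i.e. it behaves like training W directly, initialised near \alpha I,
% with a learning rate restricted by a factor of \beta^{2}.
```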

Authors:

(1) Bobby He, Department of Computer Science, ETH Zurich (Correspondence to: [email protected].);

(2) Thomas Hofmann, Department of Computer Science, ETH Zurich.

Table of Links

Abstract and Introduction

Related Work

Preliminaries

Simplifying Transformer Blocks

Further Experimental Analysis

Discussion, Reproducibility Statement, Acknowledgements and References

A Duality Between Downweighted Residual and Restricting Updates In Linear Layers

B Block Layouts

C Additional Experiments

D Implementation Details

B BLOCK LAYOUTS

In Fig. 9 and Fig. 10 we show the layouts of our SAS block (Sec. 4.2) and parallel SAS-P block (Sec. 4.3), respectively. They are the counterparts of the layouts shown in Fig. 1. Mathematically, our SAS attention sub-block computes (in the notation of Eq. (2)):
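As a rough, non-authoritative illustration of such a sub-block, the PyTorch module below sketches one way a skipless, single-head attention sub-block without value or projection weights (i.e. W_V = W_P = I) and with a shaped attention matrix could be written. The class name, the shaping scalars alpha, beta and gamma, their initial values, and the choice of centering matrix are assumptions made for this sketch rather than the paper's exact formulation.

```python
# Illustrative sketch only (single head, assumed names and initial values).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShapedAttentionSubBlock(nn.Module):
    """Skipless attention sub-block with identity value/projection matrices
    and a shaped attention matrix  alpha * I + beta * A(X) - gamma * C."""

    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_head, bias=False)
        self.w_k = nn.Linear(d_model, d_head, bias=False)
        # Learnable shaping scalars; the initial values here are illustrative.
        self.alpha = nn.Parameter(torch.tensor(1.0))
        self.beta = nn.Parameter(torch.tensor(0.1))
        self.gamma = nn.Parameter(torch.tensor(0.1))
        self.scale = d_head ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=x.device),
            diagonal=1,
        )
        # Standard masked softmax attention matrix A(X).
        logits = self.w_q(x) @ self.w_k(x).transpose(-2, -1) * self.scale
        attn = F.softmax(logits + causal_mask, dim=-1)
        # Centering matrix C: the attention matrix at zero logits
        # (masked uniform attention); this construction is an assumption.
        centre = F.softmax(torch.zeros_like(logits) + causal_mask, dim=-1)
        identity = torch.eye(seq_len, device=x.device)
        shaped = self.alpha * identity + self.beta * attn - self.gamma * centre
        # No skip connection and no W_V / W_P: the output is the shaped
        # attention matrix applied directly to the input representations.
        return shaped @ x
```

For example, ShapedAttentionSubBlock(d_model=128, d_head=32)(torch.randn(2, 16, 128)) returns a tensor of shape (2, 16, 128); consult Fig. 9 and Eq. (2) in the paper for the exact multi-head formulation.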

This paper is available on arXiv under a CC 4.0 license.

