The Math Behind Selective State Space Models

Written by serialization | Published 2024/12/18
Tech Story Tags: deep-learning | transformer-architecture | mamba-model | ai-sequence-modeling | genomics-ai-solutions | latent-state-ai-models | hyena-architecture | structured-state-space-models

TL;DR: This section examines the mechanics of Selective SSMs, detailing the discretization process, the role of learnable biases, and how the zero-order hold (ZOH) formulas shape efficient recurrences.

Authors:

(1) Albert Gu, Machine Learning Department, Carnegie Mellon University (equal contribution);

(2) Tri Dao, Department of Computer Science, Princeton University (equal contribution).

Table of Links

Abstract and 1 Introduction

2 State Space Models

3 Selective State Space Models and 3.1 Motivation: Selection as a Means of Compression

3.2 Improving SSMs with Selection

3.3 Efficient Implementation of Selective SSMs

3.4 A Simplified SSM Architecture

3.5 Properties of Selection Mechanisms

3.6 Additional Model Details

4 Empirical Evaluation and 4.1 Synthetic Tasks

4.2 Language Modeling

4.3 DNA Modeling

4.4 Audio Modeling and Generation

4.5 Speed and Memory Benchmarks

4.6 Model Ablations

5 Discussion

6 Conclusion and References

A Discussion: Selection Mechanism

B Related Work

C Mechanics of Selective SSMs

D Hardware-aware Algorithm For Selective SSMs

E Experimental Details and Additional Results

C Mechanics of Selective SSMs

In the setting of Theorem 1 (N = 1, A = −1, B = 1, s_Δ(x) = Linear(x), τ_Δ = softplus), the discretization step size is

Δ = τ_Δ(Parameter + s_Δ(x)) = softplus(Parameter + Linear(x)) = softplus(Linear(x)),

where we observe that the Parameter can be viewed as a learnable bias and folded into the linear projection. Now applying the zero-order hold (ZOH) discretization formulas:

Ā = exp(ΔA) = exp(−Δ) = 1 / (1 + exp(Linear(x))) = σ(−Linear(x)) = 1 − σ(Linear(x))

B̄ = (ΔA)⁻¹ (exp(ΔA) − I) · ΔB = −(exp(−Δ) − 1) = 1 − Ā = σ(Linear(x)).

Thus the final discrete recurrence (2a) is

h_t = (1 − g_t) h_{t−1} + g_t x_t,  where g_t = σ(Linear(x_t)),

as desired.
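The derivation can be checked numerically. The following is a minimal NumPy sketch (the `softplus`, `sigmoid`, and `zoh_discretize` helpers are illustrative, not the paper's code): it applies the scalar ZOH formulas with A = −1, B = 1, Δ = softplus(z), and confirms that the discretized parameters collapse to the gates of a gated RNN.

```python
import numpy as np

def softplus(z):
    # softplus(z) = log(1 + exp(z)); log1p gives better accuracy for small exp(z)
    return np.log1p(np.exp(z))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def zoh_discretize(delta, A, B):
    """Scalar zero-order hold: Abar = exp(dA), Bbar = (dA)^-1 (exp(dA) - 1) * dB."""
    dA = delta * A
    Abar = np.exp(dA)
    Bbar = (np.exp(dA) - 1.0) / dA * (delta * B)
    return Abar, Bbar

# Theorem 1 setting: N = 1, A = -1, B = 1, Delta = softplus(z) with z = Linear(x).
z = np.linspace(-4.0, 4.0, 9)
Abar, Bbar = zoh_discretize(softplus(z), A=-1.0, B=1.0)

# ZOH reduces to the gated-RNN update h_t = (1 - g) h_{t-1} + g x_t with g = sigma(z):
assert np.allclose(Abar, 1.0 - sigmoid(z))
assert np.allclose(Bbar, sigmoid(z))
```

The check works because exp(−softplus(z)) = 1/(1 + eᶻ) = σ(−z), so Ā = 1 − σ(z) and B̄ = 1 − Ā = σ(z), matching the closed forms above.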

This paper is available on arxiv under CC BY 4.0 DEED license.
