Chapter 5

Backpropagation, derived from scratch

Backpropagation is just the chain rule (chapter 2) applied to a neural network (chapter 3) so we can run gradient descent (chapter 4). This chapter walks through every line of the derivation, then maps it directly to the source code in this repo. No black boxes.

1

Setup β€” what we cached during the forward pass

For a network with L layers, the forward pass left us with:

  • X — the input batch, one row per example; we treat it as A⁽⁰⁾.
  • Z⁽ℓ⁾ = A⁽ℓ⁻¹⁾W⁽ℓ⁾ + b⁽ℓ⁾ — the pre-activation at each layer ℓ.
  • A⁽ℓ⁾ = σ(Z⁽ℓ⁾) — the post-activation.
  • L — a single scalar loss, computed from the final output A⁽ᴸ⁾ and the targets Y.

The goal of backprop is to find the gradient of the loss with respect to every parameter:

  ∂L/∂W⁽ℓ⁾ and ∂L/∂b⁽ℓ⁾   for every layer ℓ = 1, …, L.
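
As a concrete picture of what that cache can look like, here is a minimal sketch in plain TypeScript. The type and field names (LayerCache, z, a) are illustrative, not the repo's actual structures; the excerpt in section 7 shows the repo caching the same information as instance fields such as lastZ and lastInput.

// A minimal sketch of the values backprop needs from the forward pass.
// Names are illustrative stand-ins, not the repo's real types.
type Matrix = number[][];            // one row per example in the batch

interface LayerCache {
  input: Matrix;                     // A⁽ℓ⁻¹⁾, what this layer received
  z: Matrix;                         // Z⁽ℓ⁾, the pre-activation
  a: Matrix;                         // A⁽ℓ⁾, the post-activation
}

interface ForwardResult {
  caches: LayerCache[];              // one entry per layer, ℓ = 1 … L
  loss: number;                      // the scalar loss L
}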

2

Define a helper: the per-layer error Ξ΄

We'll keep things tidy by introducing a shorthand for "how much the loss changes if I nudge the pre-activation of layer ℓ":

  δ⁽ℓ⁾ ≡ ∂L/∂Z⁽ℓ⁾

δ⁽ℓ⁾ has the same shape as Z⁽ℓ⁾: one row per example in the batch, one column per unit in layer ℓ.

3

Compute Ξ΄ at the output layer

Start at the last layer. By the chain rule (chapter 2), differentiating the loss through the activation A⁽ᴸ⁾ = σ(Z⁽ᴸ⁾) gives:

  δ⁽ᴸ⁾ = ∂L/∂A⁽ᴸ⁾ ⊙ σ'(Z⁽ᴸ⁾)

where ⊙ is element-wise multiplication and ∂L/∂A⁽ᴸ⁾ comes directly from the loss function. In the common special case of a softmax output trained with categorical cross-entropy, the two factors fuse into the shortcut δ⁽ᴸ⁾ = (1/N)(Ŷ − Y), where N is the batch size and Ŷ = A⁽ᴸ⁾.
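
To make the general formula concrete, here is a minimal sketch in plain TypeScript, assuming, purely for illustration, a sigmoid output layer trained with mean squared error (the derivation itself doesn't require either choice):

// δ⁽ᴸ⁾ = ∂L/∂A⁽ᴸ⁾ ⊙ σ'(Z⁽ᴸ⁾), spelled out element-wise for sigmoid + MSE.
// yPred is A⁽ᴸ⁾, yTrue the targets, zLast is Z⁽ᴸ⁾; all are shaped [N][d].
function outputDelta(yPred: number[][], yTrue: number[][], zLast: number[][]): number[][] {
  const n = yPred.length;                                // batch size N
  return yPred.map((row, i) =>
    row.map((p, j) => {
      const dLdA = (2 / n) * (p - yTrue[i][j]);          // ∂L/∂A⁽ᴸ⁾ for MSE = (1/N)·Σ‖ŷ − y‖²
      const sig = 1 / (1 + Math.exp(-zLast[i][j]));      // σ(z)
      return dLdA * sig * (1 - sig);                     // multiply by σ'(z) = σ(z)(1 − σ(z))
    })
  );
}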

4

Push Ξ΄ backward through the layers

For an interior layer ℓ, applying the chain rule again, this time through the next layer's linear map Z⁽ℓ⁺¹⁾ = A⁽ℓ⁾W⁽ℓ⁺¹⁾ + b⁽ℓ⁺¹⁾ and through A⁽ℓ⁾ = σ(Z⁽ℓ⁾), gives:

  δ⁽ℓ⁾ = (δ⁽ℓ⁺¹⁾ (W⁽ℓ⁺¹⁾)ᵀ) ⊙ σ'(Z⁽ℓ⁾)

Apply this iteratively from ℓ = L−1 down to ℓ = 1 and you have δ⁽ℓ⁾ at every layer. That was the hard part.
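
Here is a sketch of one step of that recursion in plain TypeScript. The activation derivative is passed in as a callback, since the derivation doesn't fix a particular σ:

// One backward step: δ⁽ℓ⁾ = (δ⁽ℓ⁺¹⁾ · (W⁽ℓ⁺¹⁾)ᵀ) ⊙ σ'(Z⁽ℓ⁾).
// deltaNext is δ⁽ℓ⁺¹⁾ of shape [N][d₊], wNext is W⁽ℓ⁺¹⁾ of shape [d][d₊],
// z is Z⁽ℓ⁾ of shape [N][d], sigmaPrime is layer ℓ's activation derivative.
function deltaStep(
  deltaNext: number[][],
  wNext: number[][],
  z: number[][],
  sigmaPrime: (v: number) => number
): number[][] {
  return z.map((zRow, i) =>
    zRow.map((zv, j) => {
      let sum = 0;                                       // (δ⁽ℓ⁺¹⁾ Wᵀ)[i][j]
      for (let k = 0; k < wNext[j].length; k++) sum += deltaNext[i][k] * wNext[j][k];
      return sum * sigmaPrime(zv);                       // ⊙ σ'(Z⁽ℓ⁾)
    })
  );
}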

5

Read the parameter gradients off Ξ΄

Once we have δ⁽ℓ⁾, the gradients we actually wanted are:

Weight gradient. ∂L/∂W⁽ℓ⁾ = (A⁽ℓ⁻¹⁾)ᵀ δ⁽ℓ⁾.
Bias gradient. ∂L/∂b⁽ℓ⁾ = Σₙ δ⁽ℓ⁾ₙ — sum δ down the batch dimension.
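
Both formulas are short enough to write out as loops. This is a plain reference sketch, not the repo's vectorised version shown in section 7:

// ∂L/∂W⁽ℓ⁾ = (A⁽ℓ⁻¹⁾)ᵀ · δ⁽ℓ⁾, accumulated over the batch.
// aPrev is A⁽ℓ⁻¹⁾ of shape [N][dIn], delta is δ⁽ℓ⁾ of shape [N][dOut].
function weightGrad(aPrev: number[][], delta: number[][]): number[][] {
  const dIn = aPrev[0].length, dOut = delta[0].length;
  const grad = Array.from({ length: dIn }, () => new Array(dOut).fill(0));
  for (let n = 0; n < aPrev.length; n++)
    for (let i = 0; i < dIn; i++)
      for (let j = 0; j < dOut; j++) grad[i][j] += aPrev[n][i] * delta[n][j];
  return grad;
}

// ∂L/∂b⁽ℓ⁾ = Σ δ⁽ℓ⁾ over the batch dimension (one sum per unit).
function biasGrad(delta: number[][]): number[] {
  const grad = new Array(delta[0].length).fill(0);
  for (const row of delta) row.forEach((v, j) => { grad[j] += v; });
  return grad;
}
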
6

The complete algorithm

  1. Forward. Compute every Z⁽ℓ⁾ and A⁽ℓ⁾. Compute the loss.
  2. Output δ. Either δ⁽ᴸ⁾ = ∂L/∂A⁽ᴸ⁾ ⊙ σ'(Z⁽ᴸ⁾) or — if your output is softmax + categorical CE — the fused shortcut δ⁽ᴸ⁾ = (1/N)(Ŷ − Y) from section 3.
  3. Backward sweep. For ℓ = L−1, …, 1, compute δ⁽ℓ⁾ = (δ⁽ℓ⁺¹⁾ (W⁽ℓ⁺¹⁾)ᵀ) ⊙ σ'(Z⁽ℓ⁾).
  4. Parameter gradients. ∂L/∂W⁽ℓ⁾ = (A⁽ℓ⁻¹⁾)ᵀ δ⁽ℓ⁾, ∂L/∂b⁽ℓ⁾ = Σₙ δ⁽ℓ⁾ₙ.
  5. Update. Apply the optimizer (chapter 4) using each gradient.
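
To see how the five steps chain together, here is a sketch of one training step in TypeScript. The layer.forward, layer.backward, applyGradients, and lossFn.backward names mirror the excerpt in section 7; the Layer and Loss interfaces, the lossFn.forward signature, and the trainStep wrapper itself are illustrative assumptions, not the repo's actual API.

type Matrix = number[][];

interface Layer {
  forward(x: Matrix): Matrix;
  backward(dA: Matrix): Matrix;      // stores its own gradients, returns upstream δ·Wᵀ
  applyGradients(lr: number): void;  // signature assumed for illustration
}

interface Loss {
  forward(yTrue: Matrix, yPred: Matrix): number;
  backward(yTrue: Matrix, yPred: Matrix): Matrix;
}

function trainStep(layers: Layer[], lossFn: Loss, x: Matrix, y: Matrix, lr: number): number {
  // 1. Forward: run the batch through every layer, then score it.
  let out = x;
  for (const layer of layers) out = layer.forward(out);
  const loss = lossFn.forward(y, out);

  // 2. Output δ: via the generic loss derivative here; the fused
  //    softmax+CCE branch in section 7 is the alternative.
  let grad = lossFn.backward(y, out);

  // 3+4. Backward sweep: each layer turns incoming dA into δ, stores its
  //      W and b gradients, and returns δ·Wᵀ to the layer below.
  for (let i = layers.length - 1; i >= 0; i--) grad = layers[i].backward(grad);

  // 5. Update: hand each layer's cached gradients to the optimizer.
  for (const layer of layers) layer.applyGradients(lr);
  return loss;
}
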
7

Code, line by line

Here are the key lines from src/lib/nn/network.ts in this repo. Compare each line to a step from the derivation above:

// Step 1 β€” Forward pass β€” Network.forward
let out = x;
for (const layer of this.layers) out = layer.forward(out);

// Step 2 β€” Output Ξ΄ β€” fused softmax+CCE branch
if (fused) {
  delta = (yPred - yBatch) / N;          // δ⁽ᴸ⁾ = (1/N)(Ε· βˆ’ y)
} else {
  delta = lossFn.backward(yBatch, yPred); // βˆ‚L/βˆ‚A⁽ᴸ⁾
}

// Step 3+4+5 β€” Backward sweep + parameter gradients β€” Layer.backward
const dZ = fusedSoftmax
  ? dA                                    // already Ξ΄ if fused
  : this.activation.backward(dA, this.lastZ); // dA βŠ™ Οƒ'(z)

this.lastGradW = matmul(transpose(this.lastInput), dZ); // (A⁽ℓ⁻¹⁾)α΅€ Β· Ξ΄
this.lastGradB = sumRows(dZ);                          // Ξ£ Ξ΄ over batch
return matmul(dZ, transpose(this.weights));            // Ξ΄ Β· Wα΅€ β†’ upstream

// Step 6 β€” Update β€” Layer.applyGradients calls the optimizer.
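
The excerpt leans on three small matrix helpers: matmul, transpose, and sumRows. The repo's own implementations aren't reproduced here; the reference versions below are a sketch of what each must compute:

// Reference sketches of the helpers the excerpt assumes. The repo's own
// versions may differ in signature or implementation details.
function matmul(a: number[][], b: number[][]): number[][] {
  // (n×k) · (k×m) → (n×m)
  return a.map(row =>
    b[0].map((_, j) => row.reduce((sum, v, k) => sum + v * b[k][j], 0))
  );
}

function transpose(m: number[][]): number[][] {
  return m[0].map((_, j) => m.map(row => row[j]));
}

function sumRows(m: number[][]): number[] {
  // sum down the batch dimension: one total per column
  return m[0].map((_, j) => m.reduce((sum, row) => sum + row[j], 0));
}
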
8

Verify it works β€” the gradient check

A small but powerful sanity check: pick one parameter θ, perturb it by a tiny ε, and approximate its derivative numerically by running two forward passes:

  ∂L/∂θ ≈ (L(θ + ε) − L(θ − ε)) / (2ε)

If that number agrees with the gradient backprop produced for the same parameter, the implementation is almost certainly correct.
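
Here is a sketch of that check in plain TypeScript. The getParam, setParam, and computeLoss callbacks are illustrative stand-ins for however your network exposes a single parameter and its loss; they are not repo API:

// Central-difference gradient check for one parameter.
function gradCheck(
  getParam: () => number,
  setParam: (v: number) => void,
  computeLoss: () => number,        // full forward pass → scalar loss
  analyticGrad: number,             // the value backprop produced for this parameter
  eps = 1e-4
): void {
  const original = getParam();

  setParam(original + eps);
  const lossPlus = computeLoss();   // L(θ + ε)
  setParam(original - eps);
  const lossMinus = computeLoss();  // L(θ − ε)
  setParam(original);               // restore the parameter

  const numericGrad = (lossPlus - lossMinus) / (2 * eps);
  const relError =
    Math.abs(numericGrad - analyticGrad) /
    Math.max(Math.abs(numericGrad) + Math.abs(analyticGrad), 1e-12);
  console.log({ numericGrad, analyticGrad, relError }); // a tiny relError is a pass
}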