11.1 Perceptron & Linear Models

Watch a single perceptron learn a decision boundary. Two inputs connect through weighted edges to a summation node and an activation function. See how the decision line shifts as the weights update.

[Interactive demo: a perceptron architecture diagram (inputs x1 and x2 pass through weighted edges to a summation node Σ with bias b, then a step activation, then the output) beside a 2D decision-boundary plot showing Class 0, Class 1, and the line w1·x1 + w2·x2 + b = 0. Live readouts track the current weights, bias, learning rate, epoch, accuracy, and MSE loss. Activation: step, a hard threshold at z = 0 (the classic perceptron).]
Key Concepts
Perceptron Model
  • output = activation(w1·x1 + w2·x2 + b)
  • Linear decision boundary in input space
  • Can only solve linearly separable problems
  • Foundation of all neural networks
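The model above fits in a few lines of Python; the weight and bias values below are arbitrary illustrations, not the demo's learned values:

```python
# Perceptron forward pass: output = activation(w1*x1 + w2*x2 + b)
def perceptron(x1, x2, w1, w2, b):
    z = w1 * x1 + w2 * x2 + b   # weighted sum
    return 1 if z >= 0 else 0   # step activation: hard threshold at z = 0

# The boundary between the two outputs is the line w1*x1 + w2*x2 + b = 0.
print(perceptron(1, 1, 0.5, 0.5, -0.7))  # -> 1  (z = 0.3)
print(perceptron(0, 1, 0.5, 0.5, -0.7))  # -> 0  (z = -0.2)
```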
Learning Rule
  • Compute error = target - predicted
  • Δw = learning_rate × error × input
  • Update weights to reduce the error
  • Guaranteed convergence if linearly separable
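The update rule above can be sketched as a single training step; `train_step` and its signature are illustrative names, not the demo's actual code:

```python
# One perceptron update, using the step activation.
def train_step(x, target, w, b, lr=0.1):
    z = w[0] * x[0] + w[1] * x[1] + b
    predicted = 1 if z >= 0 else 0
    error = target - predicted                           # error = target - predicted
    w = [wi + lr * error * xi for wi, xi in zip(w, x)]   # delta_w = lr * error * input
    b = b + lr * error                                   # bias update: its "input" is 1
    return w, b

# A misclassified point (z = 0 predicts 1, target is 0) pushes the weights down:
w, b = train_step((1, 1), 0, [0.0, 0.0], 0.0)
print(w, b)  # -> [-0.1, -0.1] -0.1
```

A correctly classified point gives error = 0, so the weights are left unchanged.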
Limitations
  • Cannot solve XOR (Minsky & Papert, 1969)
  • Only linear decision boundaries
  • Single layer = single hyperplane
  • Need multi-layer networks for nonlinear problems
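The XOR limitation is easy to verify empirically: running the same update rule on XOR's truth table never reaches 100% accuracy, because no single line separates its classes (the hyperparameters below are illustrative):

```python
# Sketch: the perceptron rule cannot fit XOR, however long it trains.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(1000):
    for (x1, x2), target in xor:
        pred = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
        error = target - pred
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

correct = sum((1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0) == t
              for (x1, x2), t in xor)
# correct is at most 3 of 4: some XOR point is always misclassified
```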
AND Gate

Linearly separable: only (1,1) outputs 1. The perceptron can learn a line separating the classes.
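Because AND is linearly separable, the perceptron convergence theorem applies, and a plain training loop finds a separating line; the learning rate and zero initialization below mirror the demo but are otherwise arbitrary choices:

```python
# Sketch: train a perceptron on the AND gate until no mistakes remain.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0

for epoch in range(100):
    mistakes = 0
    for (x1, x2), target in data:
        error = target - predict(x1, x2)   # error = target - predicted
        if error != 0:
            w[0] += lr * error * x1        # delta_w = lr * error * input
            w[1] += lr * error * x2
            b += lr * error                # bias input is fixed at 1
            mistakes += 1
    if mistakes == 0:                      # converged: a full pass with no errors
        break

print(all(predict(x1, x2) == t for (x1, x2), t in data))  # -> True
```

Only (1,1) ends up on the positive side of the learned line, matching the AND truth table.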