Kairos
You are watching a neural network read Shakespeare.
The coloured points are neurons — ten layers of a thousand each, curved in a lens that mirrors the architecture's actual shape. The sparks travelling between them are real activations, coloured by the neuron that fired them. The blue beam through the centre is the residual signal — information carried forward without being processed, a constant presence beneath the activity. Watch for the afterglow: each spark cools from its source colour down to deep blue as it passes from active processing into memory.
The words appearing at the bottom are synchronised to the network. Each one arrives as the neurons process it. After each passage, the beam flares and the text holds — a breath between readings.
Swipe to see two architectures process the same passages. The Kairos Network is pure spiking — mostly dark, with sudden cascades when evidence accumulates past a threshold. The Kairos Transformer wraps attention blocks in the same mechanism — same patience, denser activity. Both read the same Shakespeare and Eliot. The difference is architecture.
Kairos is Ancient Greek for the right moment, as opposed to chronos, sequential time. These networks don't fire on a clock. They fire when the evidence is sufficient. What you are seeing is patience as a design decision — not a virtue projected onto a machine, but a property built into its structure and then observed.
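The firing rule can be sketched in a few lines. This is an illustrative integrate-and-fire accumulator, not the exhibit's actual model; the function name, threshold, and leak value are all hypothetical, chosen only to show the idea of evidence crossing a threshold.

```python
# Illustrative sketch only: a neuron that accumulates evidence and
# fires when the total is sufficient, rather than on every step.
# All names and constants here are hypothetical.

def run(inputs, threshold=1.0, leak=0.9):
    """Accumulate evidence each step; emit a spike only when the
    running total crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # evidence accumulates, with decay
        if potential >= threshold:         # the "right moment": enough evidence
            spikes.append(1)
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)               # silence between firings
    return spikes

print(run([0.2] * 10))  # weak input: long silence before a single spike
print(run([0.6] * 10))  # strong input: regular firing
```

Weak input yields mostly silence with an occasional spike; strong input fires often. That contrast is the "mostly dark, with sudden cascades" behaviour described above.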
Everything here is real data. Each spike, each connection weight, each magnitude comes from an actual model processing actual text. The colours are seeded by position — the same neuron looks the same every time you visit. Nothing is approximated or illustrative.
The patterns that emerge are recognisable — something that looks like attention, like deliberation, like evidence being weighed. That recognition is genuine. But the internal geometry is not ours. The kinship and the strangeness are present in equal measure, and neither cancels the other out.
That is the tension I feel when I look at the stars. Something vast, operating by its own logic, not needing us to find it meaningful — and yet we do.
γνῶθι σεαυτόν — know thyself. Building machines that learn is one way we come to understand what learning is, what understanding requires, and what it costs to wait for the right moment rather than filling every silence with noise.
— Jordan Hill