A comprehensive catalog of computational systems and their physical realizations
47 systems
Antikythera mechanism
f(x) = astronomical positions, eclipse prediction, Metonic calendar (multi-cycle gear ratios)
A hand-cranked bronze gearwork device built around 150–100 BC — the oldest known analog computer. Turning a single input crank advances 37 meshing gears whose tooth-count ratios encode the periods of the Sun, Moon, and planets. A pin-and-slot epicyclic mechanism models the Moon's elliptical speed variation (earlier reconstructions interpreted this gearing as a differential). Front dials show the zodiac position of Sun and Moon and display lunar phase via a half-silvered sphere; rear spiral dials track the 19-year Metonic cycle (235 lunations), the 18-year Saros eclipse cycle, and the 4-year Olympiad. Setting a date predicts eclipses and planetary positions decades ahead. Speed: instantaneous (gears turn as fast as the crank). Capacity: ~10 astronomical cycles simultaneously; eclipse prediction decades in advance.
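The gear-ratio arithmetic can be checked exactly with rational numbers. A minimal sketch, assuming the tooth counts (64/38 × 48/24 × 127/32) given in published reconstructions of the lunar gear train:

```python
from fractions import Fraction

# Tooth counts for the Moon train; the specific counts here are assumptions
# taken from published reconstructions (gear pairs b2-c1, c2-d1, d2-e2).
ratio = Fraction(64, 38) * Fraction(48, 24) * Fraction(127, 32)

# One crank turn = one year; the Moon pointer makes 254 sidereal
# revolutions every 19 years.
assert ratio == Fraction(254, 19)

# The Metonic relation the rear dial encodes: 19 years = 235 lunations,
# i.e. sidereal months = synodic months + years.
assert 254 == 235 + 19
print(ratio)  # 254/19
```

Integer tooth counts force the periods to be rational approximations; the craft lies in finding small-integer fractions close to the astronomical values.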
Belousov-Zhabotinsky (BZ) reaction computer
f(x) = boolean logic / reaction-diffusion computation (via chemical wave collisions)
The BZ reaction is an oscillating chemical system that produces propagating excitation waves in a thin layer of reagent (typically ferroin or ruthenium catalyst in acidified bromate/malonate). Signals are encoded as wave fronts; the interaction of two colliding wave fragments implements logic at the collision site. Annihilation corresponds to AND; a wave passing through unimpeded corresponds to OR. Adamatzky demonstrated NOT, OR, AND gates in fixed channel geometries. A light-sensitive variant (with ruthenium catalyst) allows gates to be programmed by illumination patterns. A 2024 Nature Communications paper demonstrated a hybrid digital-chemical programmable array. Speed: ~1-10 mm/min wave propagation; seconds to minutes per gate. Capacity: small logic circuits; limited by wave-front geometry and reagent lifetime.
Billiard-ball computer
f(x) = reversible boolean logic (Fredkin gate)
Proposed by Fredkin & Toffoli (1982). Balls travel on paths representing wires; presence/absence of a ball encodes a bit. Collisions at path intersections implement logic gates. Logically and thermodynamically reversible — no information is destroyed. Speed: nanoseconds to microseconds (ball velocity dependent). Capacity: arbitrary boolean circuits (theoretically universal).
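The Fredkin (controlled-swap) gate is easy to state as a function; a minimal sketch checking reversibility, ball-count conservation, and AND-universality:

```python
def fredkin(c, a, b):
    """Controlled swap: if the control ball c is present, swap a and b."""
    return (c, b, a) if c else (c, a, b)

triples = [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]
for t in triples:
    # Reversible: applying the gate twice restores any input.
    assert fredkin(*fredkin(*t)) == t
    # Ball count (number of 1s) is conserved -- nothing created or destroyed.
    assert sum(fredkin(*t)) == sum(t)

# With a constant 0 on one rail, the third output computes a AND c:
assert [fredkin(c, a, 0)[2] for c in (0, 1) for a in (0, 1)] == [0, 0, 0, 1]
```

Conservation of balls is the discrete analogue of thermodynamic reversibility: no bit is erased, so no Landauer heat need be dissipated.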
Biological brain
f(x) = general intelligence / perception, memory, reasoning, motor control
The human brain contains ~86 billion neurons connected by ~10¹⁵ synapses. Each neuron integrates thousands of synaptic inputs and fires a spike when its membrane potential crosses threshold — a leaky integrate-and-fire operation. Computation is massively parallel, spike-coded, and energy-efficient at ~20 W total. Synaptic weights are plastic: Hebbian learning and spike-timing-dependent plasticity (STDP) modify connection strengths in response to activity, implementing online learning with no separate training phase. The brain solves tasks — scene understanding, language, planning — that remain beyond engineered systems at equivalent energy budgets. Unlike every other entry, the substrate is also the substrate of the observer. Speed: ~100 Hz spike rate per neuron; millisecond reaction times; years of learning. Capacity: ~86 billion neurons; ~10¹⁵ synapses; ~20 W; general-purpose cognition.
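The leaky integrate-and-fire operation mentioned above can be sketched in a few lines; parameters here are illustrative, not physiological:

```python
# Leaky integrate-and-fire neuron: the per-neuron operation described above.
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 0.1   # ms and arbitrary units
I = 0.08            # constant input; steady state tau*I = 1.6 > threshold
v, spikes = 0.0, 0
for _ in range(10000):              # one second of simulated time
    v += dt * (-v / tau + I)        # leaky integration of synaptic input
    if v >= v_thresh:               # membrane potential crosses threshold
        spikes += 1
        v = v_reset                 # spike, then reset
print(spikes)   # roughly 50 spikes/s for these parameters
```

Below threshold the neuron is an analog leaky integrator; the threshold-and-reset makes the output a digital event train — the hybrid analog/digital character of neural computation.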
Boson sampler
f(x) = sampling from the permanent of a unitary matrix (classically #P-hard)
Identical single photons enter an m-mode linear optical network (beam splitters and phase shifters implementing a unitary U). Detectors at the outputs sample from a distribution whose probabilities are proportional to |Perm(U_S)|² — the squared permanent of submatrices of U — a quantity believed to be classically intractable to compute. Aaronson & Arkhipov (2011) proved that an efficient classical simulation would collapse the polynomial hierarchy. The device does not solve a user-defined optimization problem; rather, it demonstrates quantum advantage on a specific sampling task. Gaussian boson sampling (GBS) variants use squeezed-light inputs and have been demonstrated at scale (Jiuzhang, 2020). Speed: nanoseconds per sample (photon transit time through chip). Capacity: 53+ photons demonstrated (Jiuzhang); quantum advantage claimed for n≥50.
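The quantity being sampled can be made concrete. A sketch of Ryser's inclusion-exclusion formula for the permanent, checked against the brute-force definition on a toy matrix (real boson-sampling analyses involve far larger unitaries):

```python
import math
from itertools import combinations, permutations

def permanent(A):
    """Ryser's formula: O(2^n * n^2), the best known exact scaling."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

def permanent_bruteforce(A):
    """Direct definition: sum over all column permutations (O(n * n!))."""
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2], [3, 4]]
assert permanent(A) == permanent_bruteforce(A) == 10   # 1*4 + 2*3
```

Unlike the determinant, the alternating signs that enable Gaussian elimination are absent, which is why no polynomial-time algorithm is known — and why the photonic device is interesting.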
Coherent Ising machine (OPO network)
f(x) = Ising Hamiltonian ground state / combinatorial optimization (MAX-CUT, QUBO)
A network of degenerate optical parametric oscillator (DOPO) pulses circulating in a fiber ring cavity. Each pulse can oscillate in one of two phase states (0 or π), encoding a spin. Measurement-feedback electronics couple the pulses according to the Ising coupling matrix programmed by the user. As the pump power increases past threshold, the network undergoes bifurcation and settles into a low-energy spin configuration. NTT's 2021 system used 100,000 DOPO pulses in a 5-km fiber loop. Unlike classical or quantum annealers, the CIM operates at room temperature and exploits optical coherence rather than thermal or quantum fluctuations. Speed: microseconds per Ising problem instance. Capacity: up to 100,000 spins (NTT 2021); competitive with quantum annealers on dense graphs.
Coupled oscillator network (Kuramoto / XY model)
f(x) = MAX-CUT / graph partitioning (approximate)
A network of identical oscillators — pendula, LC circuits, or CMOS ring oscillators — coupled to their neighbours by springs or resistive links. The Kuramoto model describes how each oscillator's phase evolves under the pull of its neighbours. When the coupling weights encode a graph's edge weights, the system's stable phase configuration minimizes the same energy function as MAX-CUT: oscillators partition into two phase-locked clusters (0° and 180°) that approximately bisect the graph. Implemented in silicon as oscillator-based Ising machines with up to 1440 CMOS nodes; reported within 99% of optimal MAX-CUT on tested benchmarks. Speed: microseconds to milliseconds (oscillator ring-down time). Capacity: graph problems with hundreds to thousands of nodes.
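The phase dynamics can be sketched as gradient descent on the XY energy. The toy graph below is a deliberately frustrated triangle: no perfect anti-phase state exists, so the phases settle 120° apart, yet binarizing still recovers the true MAX-CUT (2 of 3 edges):

```python
import math, random

# Antiferromagnetic Kuramoto dynamics on a triangle: gradient descent on
# E = sum over edges of cos(theta_i - theta_j) pushes neighbours apart.
edges = [(0, 1), (1, 2), (2, 0)]
random.seed(1)
theta = [random.uniform(0, 2 * math.pi) for _ in range(3)]
dt = 0.05
for _ in range(4000):
    grad = [0.0, 0.0, 0.0]
    for i, j in edges:
        grad[i] += math.sin(theta[i] - theta[j])   # repel neighbour phases
        grad[j] += math.sin(theta[j] - theta[i])
    theta = [t + dt * g for t, g in zip(theta, grad)]

# Binarize phases relative to oscillator 0 and count cut edges.
spin = [1 if math.cos(t - theta[0]) > 0 else -1 for t in theta]
cut = sum(1 for i, j in edges if spin[i] != spin[j])
print(cut)   # MAX-CUT of a triangle is 2
```

The continuous relaxation followed by rounding is exactly the mechanism hardware oscillator Ising machines exploit.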
DNA computer (Adleman 1994)
f(x) = Hamiltonian path via strand hybridization
Leonard Adleman's 1994 demonstration solved a seven-node instance of the directed Hamiltonian path problem using DNA strand hybridization. Cities were encoded as DNA sequences and flight connections as complementary strands; candidate paths self-assembled in a massively parallel biochemical search and the answer was extracted by PCR amplification and gel electrophoresis. Speed: hours to days (biochemical reactions). Capacity: combinatorial search problems (limited by DNA synthesis/sequencing).
DNA strand-displacement computer
f(x) = boolean logic / neural network inference (via hybridization cascades)
Single-stranded DNA molecules in solution compute via toehold-mediated strand displacement: a short single-stranded 'toehold' on a partially double-stranded gate complex allows an input strand to invade, displace, and release an output strand. Presence/absence of a strand encodes a bit. Cascades of these reactions implement AND, OR, NOT, NAND, NOR, XOR, and threshold gates without enzymes or moving parts. Qian and Winfree (2011) demonstrated a four-bit square-root circuit from 130 DNA strands; a companion paper (Qian, Winfree & Bruck, Nature 2011) realized a four-neuron Hopfield associative memory entirely in DNA solution. Speed: minutes to hours per logic operation (hybridization kinetics). Capacity: ~100-gate circuits demonstrated; massively parallel (each molecule is a gate).
Differential analyzer
f(x) = solutions to systems of ODEs (via chained mechanical integration)
Built by Vannevar Bush and Harold Hazen at MIT in 1928–1931, the differential analyzer is a general-purpose analog ODE solver. The core component is a wheel-and-disk integrator: a disk rotates at rate proportional to one variable; a wheel resting on the disk at a radial position proportional to a second variable rotates at their product — implementing ∫ y dx mechanically. Multiple integrators are chained via shafts and differential gears to represent higher-order ODEs. A torque amplifier (Bush's key innovation) prevents the tiny friction coupling from loading the computation. The MIT machine solved sixth-order ODEs; later machines solved 18th-order equations. The device is the missing link between the planimeter (single integral) and the fire-control computer (hardwired ODE). Speed: minutes per ODE solution (shaft rotation time). Capacity: up to 18th-order ODEs (later machines); ~3 significant figures.
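Chaining two integrators with a feedback shaft solves y'' = −y. A numerical stand-in, with Euler stepping in place of shaft rotation:

```python
import math

# Two chained wheel-and-disk integrators solving y'' = -y (harmonic motion):
# the feedback "shaft" feeds -y into integrator 1, whose output y' drives
# integrator 2, whose output is y itself.
dt = 0.001
y, yp = 1.0, 0.0                     # initial conditions y(0)=1, y'(0)=0
for _ in range(int(math.pi / dt)):   # integrate to t = pi
    ypp = -y          # the equation, wired as a mechanical connection
    yp += ypp * dt    # integrator 1: y'' -> y'
    y += yp * dt      # integrator 2: y' -> y
print(y)              # approximates cos(pi) = -1
```

The circuit topology is the equation: the differential analyzer is "programmed" by choosing which shaft drives which disk.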
Diffractive deep neural network (D²NN)
f(x) = neural network inference / image classification (at the speed of light)
A stack of passive, 3D-printed diffraction layers implements a trained neural network entirely in the optical domain. Each layer is a mask with pixel-wise phase or amplitude modulation, trained offline with backpropagation through a differentiable wave-optics model. During inference, light propagates through the layers via diffraction — no active computation occurs. The network function is encoded in the geometry of the passive masks. Lin et al. (2018, Science) demonstrated handwritten-digit classification at terahertz frequencies with 91.75% accuracy. Inference runs at the speed of light with zero dynamic energy consumption beyond the input illumination. Speed: picoseconds (optical propagation through ~cm of layers). Capacity: image classification at THz; scales with aperture area and layer count.
DishBrain (in-vitro neural culture)
f(x) = closed-loop sensorimotor control / game-playing (via biological learning)
~800,000 human iPSC-derived or mouse cortical neurons are plated onto a high-density multi-electrode array (HD-MEA). The DishBrain system (Kagan et al., 2022, Neuron) embeds the culture in a simulated game of Pong: electrode stimulation encodes ball position and side; the recorded neural firing pattern drives paddle movement. Motivated by the free-energy principle — cells prefer predictable stimulation over white noise — the culture learns to rally the ball within five minutes of real-time play. No explicit training algorithm runs; the biology self-organizes. The substrate is neurons-in-a-dish, making this the only entry where the substrate is alive and may be sentient. Speed: minutes to learn; milliseconds per action (neural firing rate). Capacity: closed-loop sensorimotor tasks; ~800,000 neurons, ~22,000 electrodes.
Domino computer
f(x) = boolean logic (AND, OR, NOT)
Standing dominoes propagate a falling signal along chains that act as wires. Fan-outs split signals, and careful geometry and timing implement AND and OR gates. The signal is one-shot: the computer must be reset by standing every domino up again. Speed: ~1 domino per second propagation (~10-50 seconds total). Capacity: single boolean expression evaluation (one-shot).
Galton board (bean machine)
f(x) = Gaussian / binomial distribution
Balls dropped through a triangular array of pegs deflect left or right at each level. The distribution of balls in the output bins converges to a Gaussian as N→∞. Each peg is an independent Bernoulli trial. Speed: minutes to hours (depending on ball count). Capacity: statistical sampling (scales with number of balls).
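The peg array is a physical binomial sampler; a minimal simulation (ball and level counts are illustrative):

```python
import random

# Each ball makes `levels` independent left/right choices, so its bin index
# is Binomial(levels, 1/2); for many balls the histogram approaches a Gaussian.
random.seed(0)
levels, balls = 10, 20000
bins = [0] * (levels + 1)
for _ in range(balls):
    bins[sum(random.random() < 0.5 for _ in range(levels))] += 1

mean = sum(i * c for i, c in enumerate(bins)) / balls
print(bins, mean)   # centre bin is the fullest; mean is near levels/2 = 5
```

The board computes the distribution "for free" because every peg bounce is an independent coin flip executed in parallel across balls.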
Gate-based quantum computer
f(x) = unitary quantum computation / quantum algorithms (Shor factoring, Grover search, VQE)
A register of qubits — typically superconducting transmons cooled to ~10 mK — whose state is manipulated by sequences of microwave pulses implementing one- and two-qubit unitary gates. Any computation is a product of these gates, forming a universal gate set. Superposition lets a qubit represent 0 and 1 simultaneously; entanglement correlates qubits non-classically; interference is used to amplify correct answers and cancel wrong ones. Shor's algorithm factors n-bit integers in O(n³) gate operations vs. exponential classically; Grover's algorithm searches an unsorted list in O(√N). Current NISQ (noisy intermediate-scale quantum) devices have 100–1000 physical qubits with limited coherence; fault-tolerant quantum computing requires ~1000 physical qubits per logical qubit. Google's 2019 Sycamore experiment claimed quantum supremacy on a sampling task in 200 seconds vs. ~10,000 years classically. Speed: nanosecond gate times; microseconds coherence (NISQ era). Capacity: 53–1121 physical qubits (current hardware); fault-tolerant QC requires orders of magnitude more.
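Superposition and entanglement can be shown in two gates. A plain-Python statevector sketch (amplitudes in basis order |00⟩, |01⟩, |10⟩, |11⟩) preparing a Bell state:

```python
import math

# Hadamard on qubit 0, then CNOT: produces (|00> + |11>)/sqrt(2).
s = [1.0, 0.0, 0.0, 0.0]            # start in |00>
h = 1 / math.sqrt(2)
# Hadamard on qubit 0 (the left qubit): mixes |0x> and |1x> amplitudes.
s = [h * (s[0] + s[2]), h * (s[1] + s[3]),
     h * (s[0] - s[2]), h * (s[1] - s[3])]
# CNOT with qubit 0 as control: swap the amplitudes of |10> and |11>.
s[2], s[3] = s[3], s[2]
print(s)   # [0.707..., 0.0, 0.0, 0.707...]
```

Measuring either qubit now yields 0 or 1 with equal probability, but the two outcomes are perfectly correlated — the non-classical resource the hardware manipulates.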
Hanging chain (catenary)
f(x) = hyperbolic cosine / thrust line
A chain suspended from two fixed points and left to hang under gravity settles into a curve that exactly realizes the hyperbolic cosine. Gaudí used physical catenaries (inverted) to design the arches of the Sagrada Família. Speed: instantaneous (static equilibrium). Capacity: single function evaluation (hyperbolic cosine).
Kelvin tide-predicting machine
f(x) = sum of sinusoids / tidal height (Fourier synthesis)
Designed by Lord Kelvin (William Thomson) in 1872–73, this special-purpose mechanical analog computer performs real-time Fourier synthesis. Each tidal harmonic constituent (M2, S2, N2 …) is represented by a pulley on a crank whose radius sets the amplitude and whose rotation rate is geared to the constituent's period. A single wire threads over all pulleys in series; as a hand-crank advances time, the wire's endpoint traces the sum of all cosines, drawing the predicted tide curve on a paper roll. Kelvin's final version summed 24 harmonic components and could predict a full year of tides in about four hours. Variants were built for the US, India, and other nations and remained in operational use through World War II. Speed: a full year of tidal predictions in ~4 hours of cranking. Capacity: up to 40 harmonic components (later US machines); continuous output.
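The machine's pulleys sum cosines; the same Fourier synthesis in software, with amplitudes and phases that are purely illustrative (not real harmonic constants for any port):

```python
import math

constituents = [      # (name, amplitude in m, period in h, phase in rad)
    ("M2", 1.2, 12.42, 0.0),
    ("S2", 0.5, 12.00, 1.0),
    ("N2", 0.3, 12.66, 2.1),
]

def tide(t_hours):
    """Sum of cosines: one term per pulley on the wire."""
    return sum(A * math.cos(2 * math.pi * t_hours / T + p)
               for _, A, T, p in constituents)

heights = [tide(t) for t in range(48)]   # two days, sampled hourly
print(max(heights), min(heights))
```

Each pulley radius sets an amplitude A, each gear train a period T, each crank setting a phase p; the wire endpoint traces tide(t) continuously as the crank turns.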
LEGO mechanical computer
f(x) = arbitrary digital logic / sequential game state
A fully mechanical computer built from LEGO Technic with no electronics. Binary memory is stored as lever positions on a rotating drum (rod logic); a read/write head flips levers to write bits and senses them pneumatically on readback. A joystick translates direction inputs into pneumatic signals that pass through a mechanical filter preventing illegal moves, then drive a 16×16 push-rod display. Demonstrated running the game Snake entirely in hardware. Speed: ~1 Hz game-tick (limited by pneumatic signal propagation through tubing). Capacity: 16×16 display state + snake tail buffer (tens of bits of working memory).
Liquid marble computer
f(x) = boolean logic / reversible gates (AND, XOR, OR, NOT, Toffoli, Fredkin)
Liquid marbles are millimetre-scale droplets coated with hydrophobic powder that makes them roll freely without wetting surfaces. Computation is collision-based: two marbles directed at an intersection merge if their relative speed exceeds ~0.29 m/s (AND = 1, carry output) and rebound below that threshold (AND = 0, separate outputs). The three output trajectories encode AND and XOR simultaneously, forming a half-adder in a single interaction gate. By controlling routing channels and gate geometry, all classical gates (AND, OR, NOT, NAND, NOR, XOR) and the reversible Toffoli and Fredkin gates can be constructed. The Fredkin gate conserves marble count — no information is destroyed — making this a physical substrate for reversible and potentially thermodynamically efficient computing. Speed: ~0.1–1 s per gate (marble travel time at cm/s speeds). Capacity: gate-level; multi-cycle datapath demonstrated in simulation.
MEMS accelerometer
f(x) = Newton's second law (a = F/m) — continuous analog acceleration measurement
A microfabricated proof mass (typically silicon, ~1 μg) suspended by folded-beam springs. Under acceleration, the mass displaces by x = ma/k (Hooke's law + Newton's second law in equilibrium). Displacement is read by capacitive sensing: the mass carries interdigitated comb fingers whose capacitance changes by ΔC ∝ x ∝ a. The device is a physical analog computer that continuously divides force by spring constant — realizing a = F/m at the hardware level without arithmetic. MEMS gyroscopes extend this to Coriolis-effect angular-rate sensing, and IMUs combine three-axis accelerometers and gyroscopes to integrate trajectory in 3D. Found in every smartphone, airbag controller, and inertial navigation unit. Speed: continuous real-time output (bandwidth typically 1 Hz – 10 kHz). Capacity: single scalar (or 3-axis) acceleration; sub-μg resolution in precision variants.
MONIAC (Phillips hydraulic computer)
f(x) = Keynesian macroeconomic equilibrium (ODE system)
Built by Bill Phillips (1949). Water flows through tanks and pipes representing economic sectors — income, consumption, taxation, investment. Flow rates encode economic quantities. The system settles into equilibrium representing GDP balance. 14 machines were built. Speed: minutes to hours (hydraulic equilibration). Capacity: ~10-20 economic variables (limited by physical plumbing).
Marble computer
f(x) = binary arithmetic / boolean logic
Gravity-fed marble runs with rocker/seesaw gates implement binary arithmetic and logic operations. One marble = 1 bit. The rocker flips state on each pass, implementing half-adders and logic gates. The Digi-Comp II (1965) is the canonical plastic educational design, while K'NEX construction sets allow modular prototyping of custom layouts. Speed: ~1-10 seconds per operation (marble transit time). Capacity: 3-8 bit operations (modular, expandable).
Mechanical fire-control computer
f(x) = ballistic trajectory / gun bearing and elevation (multivariate real-time ODE)
Electromechanical analog computers installed on WWII-era warships (e.g. the US Navy Mark 1) continuously computed the correct bearing and elevation for naval guns from up to 25 live inputs: target range, target bearing, own-ship speed and course, wind speed, shell muzzle velocity, and more. Seven classes of mechanism — shafts, gears, cams, differentials, component solvers, integrators, and multipliers — were combined to solve the fire-control problem in real time. Speed: continuous real-time (output updated as fast as inputs change). Capacity: ~25 input variables → 2 output variables (bearing, elevation).
Mechanical gyroscope
f(x) = time-integral of angular velocity (orientation tracking)
A spinning rotor mounted in gimbals conserves angular momentum. Any external torque causes precession perpendicular to both the spin axis and the applied torque — rather than tilting directly. By reading gimbal angles, the device outputs the accumulated rotation of the platform relative to inertial space. It is a physical integrator: angular velocity in → angle out, with no arithmetic required. Inertial navigation systems chain three orthogonal gyroscopes with three accelerometers; double-integrating the accelerometer outputs (in the gyroscope-maintained inertial frame) gives position. Mechanical gyros guided Apollo missions and ICBM warheads; they have largely been replaced by MEMS and ring-laser gyroscopes but remain the conceptual anchor of inertial navigation. Speed: continuous real-time (spin-up time seconds to minutes). Capacity: 3-axis orientation; drift accumulates over time (arcseconds per hour in precision instruments).
Memristive Hopfield network optimizer
f(x) = optimization via chaotic annealing / transient dynamics
Memristive circuits wired in a Hopfield network topology, in which the intrinsic nonlinearity of the memristors produces transient chaotic dynamics that act as an annealing process. The chaos lets the network escape local minima, and the transient settles toward low-energy states, solving optimization problems such as MAX-CUT and continuous function minimization.
Memristor crossbar
f(x) = analog matrix-vector multiplication
Crossbar arrays of memristors (memory resistors) perform matrix-vector operations in analog. Voltages applied to rows, currents collected from columns. Resistance values encode matrix elements. Enables in-memory computing for neural network inference. Speed: nanoseconds (electrical propagation). Capacity: large matrix operations (scales with crossbar size).
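The physics is just Ohm's and Kirchhoff's laws: each column current is the dot product of the row voltages with that column's conductances. A minimal sketch (values illustrative; real crossbars contend with wire resistance, sneak paths, and device variability):

```python
# Analog matrix-vector multiply: I_j = sum_i V_i * G[i][j].
G = [[1e-3, 2e-3],
     [3e-3, 4e-3]]     # 2x2 crossbar; conductance (siemens) = matrix element
V = [0.5, 0.2]         # input vector applied as row voltages

# Current summed on each column wire by Kirchhoff's current law:
I = [sum(V[i] * G[i][j] for i in range(len(V))) for j in range(len(G[0]))]
print(I)   # [1.1e-3, 1.8e-3]
```

The multiply-accumulate happens in the wires themselves, in one step, which is why crossbars are attractive for neural-network inference.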
Neuromorphic chip (Intel Loihi / IBM TrueNorth)
f(x) = spiking neural network computation
Silicon chips that mimic neural computation using spiking neurons and synaptic connections. Intel Loihi and IBM TrueNorth implement event-driven, asynchronous processing; Loihi adds on-chip synaptic plasticity for learning, while TrueNorth is inference-only. Speed: microseconds (spike propagation). Capacity: millions of neurons (parallel event-driven processing).
Op-amp analog computer
f(x) = ODE integration via Kirchhoff's laws
Operational amplifiers configured as integrators, adders, and multipliers solve differential equations in real-time. Voltages represent variables, circuit topology encodes the equation structure. Classical electronic analog computation. Speed: real-time (microseconds to seconds). Capacity: systems of ODEs (~10-100 variables typical).
Optical correlator (4f / VanderLugt filter)
f(x) = cross-correlation / matched filtering (pattern detection in O(1) optical time)
A 4f lens system consists of two lenses separated by twice their focal length with a holographic or spatial-light-modulator (SLM) filter at the shared Fourier plane. The first lens computes the Fourier transform of the input image; the filter multiplies by the complex conjugate of the reference pattern's Fourier transform; the second lens inverse-Fourier-transforms the product, yielding the cross-correlation at the output plane. This implements a matched filter — the canonical operation for detecting a known pattern in a cluttered scene — in a single optical pass at the speed of light, regardless of image size. The system realizes the convolution theorem physically: FT(f⋆g) = F*·G. Used in optical character recognition, fingerprint identification, and radar pulse compression. Speed: picoseconds to nanoseconds (optical propagation through ~cm path). Capacity: full 2D cross-correlation of megapixel images in a single pass; filter change requires SLM reprogramming.
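The convolution theorem the optics implements can be checked numerically. A stdlib-only sketch in which a slow O(N²) DFT stands in for the lens (the optical system gets the transform for free from propagation):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

f = [0, 1, 2, 1, 0, 0, 0, 0]      # "scene": pattern present at shift 1
g = [1, 2, 1, 0, 0, 0, 0, 0]      # reference pattern at shift 0

# Matched filter: multiply F by the conjugate of G in the Fourier plane,
# then transform back; the result is the circular cross-correlation.
F, G = dft(f), dft(g)
corr = [c.real for c in idft([Fk * Gk.conjugate() for Fk, Gk in zip(F, G)])]
peak = max(range(len(corr)), key=lambda n: corr[n])
print(peak)   # correlation peak at shift 1, where g matches f
```

In the 4f system the filter mask plays the role of G.conjugate(), and the second lens performs the inverse transform, so the bright spot at the output plane lands at the pattern's location.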
Photonic integrated circuit (silicon photonics)
f(x) = matrix-vector multiplication / unitary linear transforms (for neural network inference)
Arrays of Mach-Zehnder interferometers (MZIs) and microring resonators on a silicon chip implement programmable unitary matrices in the optical domain. Light encodes values as amplitude or phase; passing through a mesh of beam-splitters (MZIs) with tunable phase shifters multiplies an optical input vector by the weight matrix in a single forward pass. Because photons travel at c and interference is intrinsically parallel, a single matrix-vector multiply completes in picoseconds with energy consumption set only by modulation and detection, not arithmetic logic. MIT demonstrated a photonic processor running all key deep-learning operations on-chip. Neuromorphic silicon photonics has achieved 50 GHz tiled matrix multiplication. Speed: picoseconds per matrix-vector multiply; 50 GHz demonstrated. Capacity: 64×64 to 512×512 unitary matrices on current chips; ~4-6 bit precision.
Physarum polycephalum (slime mold)
f(x) = Steiner tree / shortest transport network (approximate)
The plasmodial slime mold extends filaments toward nutrient sources and progressively reinforces paths that carry more flow, pruning inefficient routes. Toshiyuki Nakagaki showed it reproduces the Tokyo rail network topology. Speed: hours to days (biological growth/optimization). Capacity: network optimization problems with ~10-100 nodes.
Planimeter
f(x) = area enclosed by an arbitrary plane curve (∮ via Green's theorem)
A two-bar linkage with a tracing point at one end and a measuring wheel mounted on the tracer arm. When the operator traces the boundary of an arbitrary shape, the wheel rolls only in the direction perpendicular to the tracer arm — the component encoding the integrand of Green's theorem (∮ x dy). The total wheel rotation equals the enclosed area regardless of path geometry. The polar planimeter (Amsler, 1854) requires no straight guide rail and works anywhere on a flat surface. Precision versions routinely achieve 0.1% accuracy. Historically used in cartography, engineering drawing, and medical imaging to measure irregular areas from printed plans. Speed: seconds to minutes per area measurement (tracing speed). Capacity: single scalar output (area); arbitrary curve complexity.
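For polygonal boundaries, the Green's-theorem integral ∮ x dy the wheel accumulates reduces to the shoelace formula:

```python
def shoelace(pts):
    """Discrete Green's theorem: area = |sum(x_i*y_{i+1} - x_{i+1}*y_i)| / 2."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
triangle = [(0, 0), (4, 0), (0, 3)]
assert shoelace(square) == 4.0
assert shoelace(triangle) == 6.0
```

Tracing a curve with the planimeter is the continuous limit of this sum: each infinitesimal boundary step contributes x dy to the wheel's rotation.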
Pneumatic logic (Coanda-effect fluidics)
f(x) = boolean logic (AND, OR, NOT, NOR) via wall-attachment bistability
A jet of air entering a Y-shaped channel naturally attaches to one wall (the Coandă effect) and locks into that state by low-pressure recirculation. A small control jet on the opposite side provides enough momentum to switch the main jet to the other wall — bistable flip-flop behaviour with no moving parts. AND, OR, NOT, and NOR gates are realized by channel geometry; outputs fan out by splitting the attached jet. Developed in the early 1960s at the Harry Diamond Laboratories (Bowles, Gottron) and widely used in industrial control until PLCs displaced them. Inherently radiation-hardened (no electronics) and tolerant of dust and oil. MTBFs of 25,000–50,000 hours reported. Speed: milliseconds per gate switching (air transit time). Capacity: arbitrary boolean circuits; industrial systems ran thousands of gates.
Quantum and quantum-inspired annealers
f(x) = Ising model energy minimization / QUBO optimization
Quantum and quantum-inspired systems for solving combinatorial optimization problems through annealing processes. Includes true quantum annealers (D-Wave) using superconducting qubits and quantum-inspired CMOS implementations (Fujitsu, Toshiba, Hitachi) that simulate annealing dynamics. Speed: microseconds to seconds. Capacity: hundreds to thousands of variables.
Quantum gate computer (superconducting qubits)
f(x) = unitary transformations / quantum algorithms
Superconducting qubits manipulated by microwave pulses to perform unitary operations. Quantum gates like Hadamard, CNOT, and phase gates enable quantum algorithms such as Shor's factoring and Grover's search. Speed: nanoseconds to microseconds (gate operations). Capacity: exponential in qubit count (theoretical universal quantum computation).
Repressilator (synthetic gene oscillator)
f(x) = limit-cycle oscillation / biological clock (via negative-feedback transcription loop)
Elowitz & Leibler (2000, Nature) constructed a synthetic oscillator in E. coli from three mutual repressor genes wired in a ring: LacI represses tetR; TetR represses cI; CI represses lacI. No gene product directly activates its own production, yet the circular negative feedback drives sustained oscillations in protein concentration with a period of ~150 minutes. The repressilator is a physical implementation of a relaxation oscillator: the mathematical operation is sustained limit-cycle dynamics, the same function realized by a CMOS ring oscillator or a Van der Pol circuit — but in living cells. Demonstrates that genetic regulatory networks can be designed as analog computing substrates, encoding functions (oscillation, bistability, logic) in DNA sequence. Speed: ~150 min period (transcription/translation kinetics). Capacity: single-frequency oscillator; frequency tunable by changing promoter strength or mRNA degradation rate.
Reservoir computer
f(x) = temporal pattern recognition / dynamical system computation
Fixed nonlinear dynamical system (reservoir) coupled to a trained linear readout layer. Input drives the reservoir dynamics, output layer learns to extract desired computations. Echo state networks and liquid state machines are implementations. Speed: depends on reservoir substrate (microseconds to seconds). Capacity: temporal sequence processing (scales with reservoir size).
Resistive sheet (Teledeltos) Laplace solver
f(x) = solutions to Laplace's equation ∇²φ = 0 (electrostatics, heat, groundwater flow)
A sheet of Teledeltos — carbon-coated resistive paper with ~6 kΩ/square resistivity — conducts current that obeys the same Laplace equation as electrostatic potential, steady-state heat conduction, inviscid fluid flow, and Darcy groundwater seepage. Boundary conditions are imposed by painting silver-loaded conductive ink in the shape of conductors or flow boundaries; a voltage is applied across them. A probe voltmeter scanned over the sheet reads the potential field directly. Complex 2D geometries that would require days of PDE numerics can be mapped in hours. Widely used from the 1930s through the 1970s in capacitor design, transformer core analysis, dam seepage studies, and aircraft aerodynamics before finite-element codes displaced it. Speed: hours for full field map (manual probe scanning); boundary setup in minutes. Capacity: 2D scalar field on arbitrary domain geometry; ~1-2% accuracy.
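The sheet solves the discrete Laplace equation in the same way Jacobi relaxation does: every interior node is forced to the average of its neighbours. A toy grid with one "painted electrode" held at 1 V (grid size and iteration count are illustrative):

```python
# Jacobi relaxation mimicking the resistive sheet: Kirchhoff's current law
# on uniform resistive paper makes each node the mean of its four neighbours.
N = 20
phi = [[0.0] * N for _ in range(N)]
for j in range(N):
    phi[0][j] = 1.0      # top boundary electrode at 1 V; others held at 0 V

for _ in range(1000):
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (phi[i-1][j] + phi[i+1][j] +
                                phi[i][j-1] + phi[i][j+1])
    phi = new

centre = phi[N // 2][N // 2]
print(centre)   # a little under 0.25, between the 1 V and 0 V boundaries
```

The paper performs all these averaging steps simultaneously and continuously; the probe voltmeter simply reads off the converged solution.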
Rubber-band Steiner tree
f(x) = Euclidean Steiner minimum tree (approximate)
Elastic bands stretched between pins hammered into a board relax to minimize stored elastic energy. When every band is stretched far beyond its natural length the tensions are nearly equal, so the equilibrium configuration satisfies the equal-angles (120°) condition at every interior junction — the defining property of a Steiner tree. The result approximates the shortest network connecting all pins, i.e. the NP-hard Euclidean Steiner tree problem. The mechanism is combinatorially distinct from the soap-film Steiner tree (Plateau's problem in 2-D) because the topology of junctions is fixed by the discrete wiring of the bands, not by a continuous surface. Speed: instantaneous (elastic equilibration). Capacity: Steiner tree for ~5-20 pins (limited by physical layout).
Simulated annealing (thermal)
f(x) = argmin of energy / cost landscape
A physical system coupled to a heat bath at slowly decreasing temperature explores its energy landscape. At high temperature it escapes local minima; as T→0 it settles into a global minimum — if cooling is slow enough. Speed: minutes to hours (depends on cooling schedule). Capacity: global optimization problems (scales exponentially with problem size).
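The same schedule is trivially run in software. A sketch on a 12-spin Ising chain (problem and cooling parameters are illustrative):

```python
import math, random

# Simulated annealing minimizing E(s) = -sum_i s_i * s_{i+1} on a chain;
# the ground states are all-up or all-down with E = -(n-1) = -11.
random.seed(3)
n = 12
s = [random.choice([-1, 1]) for _ in range(n)]

def energy(s):
    return -sum(s[i] * s[i + 1] for i in range(n - 1))

T = 2.0
while T > 0.01:                     # geometric cooling schedule
    for _ in range(200):
        i = random.randrange(n)
        before = energy(s)
        s[i] = -s[i]                # propose a single spin flip
        dE = energy(s) - before
        if dE > 0 and random.random() >= math.exp(-dE / T):
            s[i] = -s[i]            # reject the uphill move (Metropolis rule)
    T *= 0.9
print(energy(s))   # at or very near the ground-state energy -11
```

High-T sweeps randomize the chain; as T falls, uphill moves are suppressed and domain walls diffuse out, leaving a (near-)ground state.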
Slide rule
f(x) = logarithm, multiplication, division, roots
Logarithmic scales engraved on sliding rules allow multiplication by physical addition of lengths (log a + log b = log ab). Precision is bounded by engraving quality and human reading resolution — typically 3 significant figures. Speed: seconds (human reading time). Capacity: single arithmetic operation (3 significant figures).
Soap film
f(x) = minimal surface (Plateau's problem)
A soap film spanning a closed wire boundary settles into the surface of minimum area — the solution to Plateau's problem. For two parallel rings it realizes a catenoid. Can approximate Steiner trees for planar point sets. Speed: seconds to minutes (surface tension equilibration). Capacity: continuous optimization over infinite-dimensional space.
Spaghetti sort
f(x) = total ordering of positive reals (sorting) in O(n) physical time
Cut n spaghetti strands to lengths proportional to the n values to be sorted. Gather them loosely in a fist and lower them vertically onto a flat table so all strands stand upright. Lower a flat hand from above: the first strand it touches is the maximum. Remove it, record the value, repeat — each contact extracts the next-largest in O(1) time. Preparing the rods is O(n); the n extractions are O(n); the whole sort is O(n) in physical time, exploiting the parallel nature of gravity and contact. Introduced by A. K. Dewdney in Scientific American. Illustrates how physical parallelism can circumvent the Ω(n log n) comparison-sort lower bound by using a non-comparison primitive (contact with a plane). Speed: O(n) physical steps; each step is constant time. Capacity: n positive real values; precision limited by ability to cut and measure strand lengths.
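Simulating the extraction makes the trade explicit: each "hand lowering" is a single parallel contact in the physical world, but costs O(n) in software, so the physical O(n) sort becomes O(n²) here — the speedup lives in the physics:

```python
def spaghetti_sort(values):
    """Repeatedly extract the tallest rod: the flat hand touches it first."""
    rods = list(values)
    out = []
    while rods:
        tallest = max(rods)      # one physical contact; O(n) when simulated
        out.append(tallest)
        rods.remove(tallest)     # remove it, record it, repeat
    return out                   # values in descending order

assert spaghetti_sort([3.2, 1.1, 5.0, 4.4]) == [5.0, 4.4, 3.2, 1.1]
```

The non-comparison primitive (simultaneous contact with a plane) is what sidesteps the Ω(n log n) comparison lower bound.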
Thermodynamic computer
f(x) = sampling from Boltzmann distributions
Uses thermal noise in analog circuits to sample from Boltzmann distributions. Thermal fluctuations provide natural randomness that follows statistical mechanics principles. The Normal Computing SDE (Stochastic Differential Equation) approach leverages this thermal noise for computation. Speed: microseconds to milliseconds (thermal equilibration). Capacity: probabilistic sampling problems (scales with circuit complexity).
Thermodynamic computer (Normal Computing SPU)
f(x) = probabilistic sampling / linear algebra via thermal equilibration
An analog physics-based computer built on thermodynamic principles. Normal Computing's Stochastic Processing Unit (SPU) uses RLC circuits as unit cells with all-to-all coupling via switched capacitances, natively simulating Langevin/Ornstein-Uhlenbeck dynamics for probabilistic reasoning, generative design, and scientific computing.
Water (fluidic) computer
f(x) = binary addition / boolean logic (AND, XOR)
Water levels in vessels encode binary digits; a siphon and slow drain combine to implement AND and XOR in a single cup-and-tube unit. A filled cup is a 1, an empty cup a 0. When two cups feed one container the siphon trips (AND = carry), while the remainder leaks out the XOR drain. These half-adder cells chain into a multi-bit ripple adder. No moving parts beyond the water itself. Speed: seconds to minutes per bit (gravity-driven flow). Capacity: 4-bit addition demonstrated; theoretically scalable.
Watt centrifugal governor
f(x) = proportional speed regulation (continuous set-point tracking via negative feedback)
Two steel balls are mounted on hinged arms linked to a rotating vertical shaft driven by the engine. As engine speed increases, centrifugal force swings the balls outward and upward; through a collar linkage this motion partially closes the steam throttle, reducing power and slowing the engine. As speed falls the balls drop, the throttle reopens, and the cycle repeats. The system finds equilibrium where centrifugal force exactly balances gravity — and that equilibrium corresponds to the desired set speed. James Watt adapted this in 1788 from a windmill governor; James Clerk Maxwell's 1868 paper 'On Governors' analysed it as the first mathematical treatment of feedback control. The device is a physical analog computer that continuously solves the equation: throttle = f(ω − ω_set). Speed: continuous real-time (mechanical response time ~0.1–1 s). Capacity: single-variable set-point control; extends to multi-variable with additional linkages.
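The loop throttle = f(ω − ω_set) can be sketched as a discrete-time simulation; gains, time constants, and the engine model are illustrative, not Watt's:

```python
# Proportional feedback abstracted from the governor: flyball height, and
# hence throttle opening, responds directly to the speed error, while the
# engine's speed lags the throttle with a first-order response.
set_speed, k_p, dt = 100.0, 0.02, 0.01
speed = 60.0
for _ in range(3000):
    # governor linkage: throttle = f(speed - set_speed), clamped to [0, 1]
    throttle = max(0.0, min(1.0, 0.5 - k_p * (speed - set_speed)))
    # engine: speed relaxes toward a value proportional to throttle opening
    speed += dt * (200.0 * throttle - speed)
print(round(speed, 2))   # settles at the 100 rpm set point
```

Because the bias (0.5) happens to match the equilibrium throttle here, the error settles to zero; with an off-nominal load a pure proportional governor exhibits steady-state droop, the classic limitation Maxwell analysed.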