Diffractive deep neural network (D²NN)
Realizes: neural network inference / image classification (at the speed of light)
A stack of passive, 3D-printed diffraction layers implements a trained neural network entirely in the optical domain. Each layer is a mask with pixel-wise phase or amplitude modulation, trained offline with backpropagation through a differentiable wave-optics model. During inference, light propagates through the layers via diffraction — no active computation occurs. The network function is encoded in the geometry of the passive masks. Lin et al. (2018, Science) demonstrated handwritten-digit classification at terahertz frequencies with 91.75% accuracy. Inference runs at the speed of light with zero dynamic energy consumption beyond the input illumination. Speed: picoseconds (optical propagation through ~cm of layers). Capacity: image classification at THz; scales with aperture area and layer count.
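The forward pass described above alternates free-space diffraction with pixel-wise phase modulation. A minimal sketch of one such propagate-modulate step, using the standard angular-spectrum method with NumPy (all parameters here are illustrative placeholders, not values from the paper; in training, `phase_mask` would be the learnable tensor optimized by backpropagation):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative THz-scale parameters (assumed, not from Lin et al.)
wavelength = 0.75e-3   # ~0.4 THz in metres
dx = 0.4e-3            # layer pixel pitch
n = 64

# Input: plane wave truncated by a square aperture
field = np.zeros((n, n), dtype=complex)
field[24:40, 24:40] = 1.0

# One passive layer = a fixed pixel-wise phase mask; random here,
# but trained offline in an actual D²NN
rng = np.random.default_rng(0)
phase_mask = rng.uniform(0.0, 2.0 * np.pi, (n, n))

field = angular_spectrum_propagate(field, wavelength, dx, 3e-3)
field = field * np.exp(1j * phase_mask)   # layer modulation
field = angular_spectrum_propagate(field, wavelength, dx, 3e-3)

intensity = np.abs(field) ** 2            # detector plane reads intensity
print(intensity.shape, float(intensity.sum()))
```

Because every operation here (FFT, multiplication by a phasor) is differentiable, stacking several such steps and adding a detector-region readout gives the differentiable model through which the mask phases are trained; at inference time only the physical masks remain.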
Examples
All-optical machine learning using diffractive deep neural networks (Science, 2018)
Lin et al. introduce the D²NN: five 3D-printed passive diffraction layers classify MNIST digits at 91.75% accuracy, consuming no power beyond the input illumination.