This article details a fascinating approach by researchers at Technische Universität Berlin to address the skyrocketing energy costs of training massive neural networks like GPT-3. Instead of the traditional method of adding more neurons and layers to a network (scaling in space), the team designed a “Folded-in-time Deep Neural Network” (Fit-DNN) that emulates an entire deep network with just a single neuron.
The concept relies on feedback-modulated delay loops. Rather than passing data through layers of physical nodes, the system activates the single neuron over and over in sequence, “weighting the same neuron differently over time” so that each pass through a delay loop plays the role of a new layer. The researchers compare it to “a single guest simulating the conversation at a large dinner table by switching seats rapidly and speaking each part.” Because the delay loops can be realized with lasers, the system operates near the speed of light, theoretically allowing a “limitless number” of neuronal connections without the hardware footprint of a conventional deep network.
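To make the folding idea concrete, here is a minimal Python sketch of the principle, not the authors’ actual model (which is a continuous-time laser system). A single activation function is evaluated once per time step, a delay buffer of its past outputs stands in for the previous layer, and the layer counts, sizes, and time-varying weights are hypothetical values chosen for the demo:

```python
import numpy as np

def f(a):
    # The one shared nonlinearity: "the neuron" that fires at every step.
    return np.tanh(a)

rng = np.random.default_rng(42)
N, L = 5, 3                          # virtual nodes per layer, number of layers
W = rng.normal(0.0, 0.5, size=(L, N, N))  # time-varying weights, one row per step
x_in = rng.normal(0.0, 1.0, size=N)       # toy input signal

buffer = list(x_in)                  # delay line: the neuron's past outputs
for layer in range(L):
    prev = np.array(buffer[-N:])     # the last N outputs act as the previous layer
    for node in range(N):
        # One neuron, one moment in time: weight the delayed signals,
        # fire once, and append the result to the delay line.
        buffer.append(f(W[layer, node] @ prev))

output = np.array(buffer[-N:])       # the final N time steps form the output layer
print(output)
```

In the physical system, each delay loop contributes only one fixed time offset, so the effective layer-to-layer weight matrix would be built from a few modulated diagonals rather than the dense matrix used above; the dense version is a simplification for readability.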