Some AI Systems May Be Impossible to Compute

Research suggests fundamental theoretical limits to deep learning, drawing parallels to Gödel and Turing: just because a stable, accurate neural network exists mathematically does not mean an algorithm can compute it.

This IEEE Spectrum article discusses a study published in PNAS by researchers from the University of Cambridge and the University of Oslo. The authors identify a paradox at the heart of deep learning: while a stable and accurate neural network might mathematically exist for a given problem, there may be no algorithm capable of actually computing it on a digital computer.
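To make the paradox concrete, here is a schematic statement in the spirit of the result. The notation is illustrative, not the paper's formal theorem: existence of a good network does not imply a procedure for finding one.

```latex
% Schematic only: illustrative symbols, not the paper's exact formulation.
% Existence: for a problem f and tolerance \epsilon, a stable, accurate
% network \phi exists...
\[
  \exists\,\phi \quad \text{such that} \quad
  \|\phi(x) - f(x)\| \le \epsilon \ \ \text{for all valid inputs } x,
\]
% ...yet no algorithm \Gamma, fed only training data, is guaranteed to
% produce a network with that property:
\[
  \nexists\,\Gamma \quad \text{such that} \quad
  \hat{\phi} = \Gamma(\text{training data}) \ \ \text{and} \ \
  \|\hat{\phi}(x) - f(x)\| \le \epsilon \ \ \text{for all valid inputs } x.
\]
```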

The researchers draw parallels to the work of Kurt Gödel and Alan Turing regarding the fundamental limits of mathematics and computation. They illustrate the problem with a “cake analogy”: a recipe for the perfect cake may exist, but if the physical tools (the mixer) are insufficient, the cake cannot be made—or worse, you might bake a completely different cake and not realize it until you taste it.

Practically, this manifests as a trade-off between accuracy and stability. In safety-critical fields like medical imaging and autonomous driving, high accuracy is often achieved at the cost of stability, leaving systems vulnerable to adversarial attacks, where changing a single pixel can cause a misdiagnosis (see the sketch below). The study suggests that rather than placing blind faith in deep learning, the field needs a classification theory that identifies which neural networks can actually be computed by algorithms.
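The instability the article describes can be probed directly. The sketch below is a minimal illustration in PyTorch, using a hypothetical toy model (the study is not tied to this architecture or attack): it brute-forces the simplest possible perturbation, changing one pixel at a time and checking whether the prediction flips.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a diagnostic model. Everything here is
# a hypothetical stand-in; the study does not use this setup.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def one_pixel_flip(model, x):
    """Brute-force single-pixel search: push each pixel to an extreme
    value and return the first change that alters the prediction.
    An unstable network can be flipped this way; a stable one cannot."""
    with torch.no_grad():
        base = model(x).argmax(dim=1).item()
        for i in range(x.shape[2]):
            for j in range(x.shape[3]):
                for v in (0.0, 1.0):
                    x_adv = x.clone()
                    x_adv[0, 0, i, j] = v
                    if model(x_adv).argmax(dim=1).item() != base:
                        return i, j, v, model(x_adv).argmax(dim=1).item()
    return None

x = torch.rand(1, 1, 28, 28)  # stand-in for a 28x28 grayscale scan
print("clean prediction:", model(x).argmax(dim=1).item())
print("single-pixel flip:", one_pixel_flip(model, x))
```

Real adversarial attacks use gradients rather than brute force, but the point survives in miniature: if any single-pixel change flips the output, the network is unstable in exactly the sense the trade-off describes.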

Link