Thermodynamic computing for AI applications
Patrick Coles, Chief Scientist at Normal Computing
Computing systems for AI workloads now dominate enterprise budgets and have geopolitical ramifications. Many modern AI algorithms are physics-inspired, often drawing on classical thermodynamics in particular. Yet we currently run these algorithms on physics-agnostic hardware, namely GPUs. One may wonder whether the inherent thermodynamic processes in silicon chips could be leveraged for computations that accelerate AI workloads, given the natural match between the physics of the hardware and the mathematics of AI algorithms. In this talk, I will discuss our team’s work to turn this vague idea of a “thermodynamic computer” into a precise architecture. What are the basic building blocks of thermodynamic computers? What mathematical primitives can they accelerate? What kinds of speedups do we expect? How do we mitigate errors? I will address these questions, present proof-of-principle demonstrations from our first prototype circuit, and discuss our plans to scale up our hardware in silicon.