In an effort to use chip memory more efficiently, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have designed a cache system dubbed “Jenga” that builds new cache structures on the fly, optimized for a specific application. Unlike typical memory caches, this implementation can actually change its hierarchy and determine how and where to store data so that lag is reduced. On a simulation of a 36-core chip, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent.
Unlike today’s cache management systems, Jenga distinguishes between the physical locations of the separate memory banks that make up the shared cache. For each core, Jenga knows how long it would take to retrieve information from any on-chip memory bank, a measure known as “latency.” Jenga builds on an earlier system from Sanchez’s group, called Jigsaw, which also allocated cache access on the fly. But Jigsaw didn’t build cache hierarchies, and adding that capability makes the allocation problem much more complex.
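To make the latency-aware allocation idea concrete, here is a minimal illustrative sketch (not MIT's actual algorithm; the function name, greedy strategy, and data shapes are all assumptions for illustration): each core greedily claims capacity from the on-chip banks closest to it, so its working set lives in low-latency memory.

```python
# Illustrative sketch only -- not Jenga's real allocator.
# latency[core][bank] models distance-dependent access cost on the chip.

def allocate_banks(latency, demand, capacity):
    """Greedily assign each core's cache demand to its nearest banks.

    latency[core][bank]: access cost from a core to a bank
    demand[core]: cache lines the core's working set needs
    capacity[bank]: cache lines the bank can hold
    Returns {core: [(bank, lines_taken), ...]}.
    """
    remaining = list(capacity)
    alloc = {}
    # Serve the most demanding cores first (a simple heuristic, assumed here).
    for core in sorted(range(len(demand)), key=lambda c: -demand[c]):
        need = demand[core]
        # Visit banks from nearest to farthest for this core.
        for bank in sorted(range(len(remaining)), key=lambda b: latency[core][b]):
            if need == 0:
                break
            take = min(need, remaining[bank])
            if take:
                alloc.setdefault(core, []).append((bank, take))
                remaining[bank] -= take
                need -= take
    return alloc

# Two cores, three banks: core 0 sits nearest bank 0, core 1 nearest bank 2.
latency = [[1, 3, 5],
           [5, 3, 1]]
alloc = allocate_banks(latency, demand=[4, 4], capacity=[4, 4, 4])
print(alloc)  # {0: [(0, 4)], 1: [(2, 4)]}
```

Each core ends up with its data in the physically closest bank, which is the effect the latency awareness described above is after; Jenga's real allocator additionally decides how many hierarchy levels to build, which this toy greedy pass does not model.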
Discussion
Source: [H]ardOCP – MIT Develops “Jenga” Memory Cache System