The Problem with the Matrix Theory of AI-Assisted Human Learning

In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (They tend to "hallucinate" inaccuracies, he notes, and may first be relegated to areas "where reliability isn't a concern," like videogames, song mash-ups, children's shows, and "bespoke" images.)

“The problem is that those are the areas that matter most for economic growth…”

One lesson of the digital age is that more is not always better… The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?
You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the “boring apocalypse” scenario for A.I., in which “we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We’re just inflating and compressing content generated by A.I.”

But there’s another worry: that the increased efficiency “would come at the cost of new ideas and deeper insights.”

Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I’ve come to think of as the Matrix theory of knowledge.

Many of us wish we could use the little jack from “The Matrix” to download the knowledge of a book (or, to use the movie’s example, a kung fu master) into our heads, and then we’d have it, instantly.

But that misses much of what’s really happening when we spend nine hours reading a biography. It’s the time inside that book spent drawing connections to what we know … that matters…

The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real… To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don’t overwhelm and distract and diminish us.

We failed that test with the internet. Let’s not fail it with A.I.

Source: Slashdot – The Problem with the Matrix Theory of AI-Assisted Human Learning