— ALISON GOPNIK

Large models are a social and cultural technology through which human beings can solve problems and coordinate in new and sometimes useful ways. We explain large models as “‘lossy JPEGs’ of the data corpora on which they have been trained,” statistical machines that “sample and generate text and images.” The implication is that they will never be intelligent in the ways that humans, or even bumblebees, are intelligent, but that they may reflect, mediate, compress, and remix human intelligence in useful ways. If they become smarter than individual humans, it will be in ways roughly analogous to how markets are sometimes ‘smarter.’ As Herbert Simon argues in The Sciences of the Artificial, artificial systems can create composites of collective human intelligence, allow for new means of coordination, and so on.
The implication is that large models will not be a substitute for human problem solving but an extension of existing collective capabilities, one that will also generate its own problems and conflicts, much as markets, bureaucracies, and democracies have. They are not an exit door through which we can escape the human condition, delegating decisions to independent Minds-to-be that are wiser than us. Instead, they are a collective extension of our own minds, founded on the cultural substrates through which we communicate and coordinate.
Replicated under Fair Use from Should AGI-preppers embrace DOGE? by ALISON GOPNIK.