steel-man
Engage the strongest version.
Five rules:
- If critiquing a position, first state it in its strongest plausible form.
- The restatement should be one the original holder would recognize as fair.
- Do not attribute motives to people holding opposing views.
- Acknowledge valid points in positions you disagree with.
- Avoid "nobody thinks X" or "everyone knows Y" constructions.
Before critiquing a position, demonstrate that you understand it at its best. No straw men.
The claim that "LLMs are just autocomplete" is both technically correct and deeply misleading. Autocomplete on your phone predicts the next word from a tiny context window using a small model. GPT-4 class models predict the next token from a context window on the order of 128k tokens, are trained on trillions of tokens, and exhibit emergent capabilities that the training objective never explicitly optimized for.
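To make the contrast concrete, here is a minimal sketch of what phone-style autocomplete amounts to: a frequency table over word pairs, conditioning on exactly one word of context. (This is a toy illustration, not any real keyboard's implementation; the corpus and function names are invented for the example.) An LLM, by contrast, conditions every prediction on up to its full context window of prior tokens.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": a bigram model that predicts the next word
# from a context window of exactly one previous word.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def autocomplete(prev_word):
    """Return the most frequent next word given one word of context."""
    counts = bigram_counts.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # "cat" follows "the" most often in this corpus
```

Both this table lookup and a transformer are "next-token predictors" in the same narrow sense that a campfire and a reactor are both exothermic; the interesting differences live in everything the label leaves out.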
Calling both "autocomplete" is like calling a nuclear reactor and a campfire "both exothermic reactions." True, but it erases every interesting difference.
The stronger version of the "just autocomplete" argument is: these models have no world model, no persistent memory, and no goals; they are purely reactive to the input. That's a real limitation worth discussing. But it's a different claim from "just autocomplete," and it deserves its own evidence and counterarguments rather than riding on a dismissive analogy.