Coldkennels
Barnack-toting Brit.
> It's ironic and frightening at the same time to know that we've created AI based on how our own brains and minds work.
>
> [...]
>
> If we want AI to generate some other results, we need to feed it with different data, and other algorithms to generate other connections.

Sadly, this is fundamentally wrong.
I spend way too much time reading and listening to tech-critical journalism (Cory Doctorow, Paris Marx, Ed Zitron, et al.), and someone recently did a long-form dissection of this myth. And it is a myth - one that's been cultivated for a very specific reason.
Altman and the like really, really want people (by which I mean investors) to believe they're en route to artificial general intelligence. They're not. Generative AI is a dead end - it's nothing more than fancy auto-complete. All it can do is regurgitate things which have been put into it.

The fundamental difference between the human brain (and the science-fiction concept of artificial general intelligence) and the current crop of things presented as "AI" is that the human brain can make new connections and extrapolate outwards from the data that goes in. Generative AI cannot. It can only repeat what it "knows" - what it's been trained on - and if presented with a gap in that knowledge, it will "hallucinate" (which is tech-industry marketing-speak for "making sh*t up"). And this is widely acknowledged as an unfixable problem by the people making these plagiarism machines (OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws).
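To make the "fancy auto-complete" point concrete: here's a toy sketch of the idea, not how real LLMs are built (those are neural networks predicting tokens, not word-pair lookup tables), but the core limitation is the same - the model can only emit continuations it has already seen, and when it hits a context with no recorded continuation, it has nothing real to offer. All names and the training sentence here are made up for illustration.

```python
# Toy "autocomplete": a bigram model that can only recombine
# word-to-word transitions observed in its training text.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the cat ate the fish"

# Build a table: word -> list of words that followed it in training.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def autocomplete(start, length=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:   # a gap in the "knowledge": nothing left to say
            break
        out.append(random.choice(options))
    return " ".join(out)

print(autocomplete("the"))
```

Every sentence this thing produces is stitched together from transitions that already exist in the training data; start it on a word with no recorded follower (here, "fish") and it simply stops. A real LLM in the same situation doesn't stop - it produces the statistically plausible-looking continuation anyway, which is the "hallucination".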
GenAI isn't going to watch an apple fall and figure out how gravity works. It's not going to pull burdock seeds off its coat and invent Velcro. It's just going to keep scraping Reddit sh*tposts and use them as a reference to tell people to use glue to keep the cheese on their pizza.