And remember that these AI projects are in a state of evolution. I am not saying that they are perfect. I am saying that they are not yet finished. We aren't driving Model T's anymore, nor are we flying what the Wright Brothers first flew. You get it?
I take it you didn't listen to the podcast I linked to, then.
There are fundamental flaws and issues in the very concept of "AI" that seem near-impossible to fix. During the whole "glue on pizza" debacle, people involved in "AI" were admitting that hallucinations are part of "AI" itself. For instance, from this Fortune article -
Tech experts are starting to doubt that ChatGPT and A.I. 'hallucinations' will ever go away: 'This isn’t fixable'

Daniela Amodei, co-founder and president of Anthropic, said “They’re really just sort of designed to predict the next word [...] and so there will be some rate at which the model does that inaccurately.”
And further down in the same article:
“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”
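To make Amodei's "designed to predict the next word" point concrete, here's a toy sketch - my own illustration, nothing from the article, and real models use neural networks rather than bigram counts, but the principle is the same. It's a generator that only knows which words tend to follow which; notice that nothing anywhere in it checks whether the output is true. "Statistically plausible" is the only criterion it has.

```python
import random
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees word sequences,
# never facts. (Made-up sentences, purely illustrative.)
corpus = (
    "glue keeps cheese on pizza . "
    "sauce keeps cheese on pizza . "
    "cheese belongs on pizza . "
    "glue belongs in crafts . "
).split()

# Count which word follows which - that is the model's entire "knowledge".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    """Repeatedly sample a statistically plausible next word.
    There is no truth-checking step anywhere in this loop."""
    words = [start]
    for _ in range(length):
        followers = bigrams[words[-1]]
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("glue"))  # can emit "glue keeps cheese on pizza ." - fluent, wrong
```

Scale that idea up by a few billion parameters and you get fluent text with exactly the same blind spot: some rate of confidently-stated nonsense is baked in.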
Despite the way we've been encouraged to think about these models - weirdly anthropomorphising them in the process - they're not really able to "think" in any capacity, which is why I hate the term "AI". There's no intelligence there - they're just prediction engines. And yes, garbage in, garbage out. And this is important, because they're rapidly running out of good content to train the models with. Not only is there physically not enough human-created material for these machines to syphon up, but there's now so much "AI" slop across the internet that they're running into it and absorbing it organically. This causes problems (When AI Is Trained on AI-Generated Data, Strange Things Start to Happen):
...as it turns out, when you feed synthetic content back to a generative AI model, strange things start to happen. Think of it like data inbreeding, leading to increasingly mangled, bland, and all-around bad outputs.
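You can demonstrate that inbreeding effect with a deliberately simplified stand-in - my own toy, not from the article; the research versions use actual generative models, but the mechanism is the same. Each "generation" trains only on the previous generation's output, and because generative models under-represent rare material, the tails of the distribution get clipped a little each round:

```python
import random
import statistics

random.seed(0)

# Generation 0: "human-written" data with a healthy spread.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

for gen in range(7):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    print(f"gen {gen}: spread (stdev) = {sigma:.3f}")
    # The next "model" is fitted purely to the previous model's output.
    # Generative models tend to drop rare/unusual material, which we
    # mimic here by discarding anything more than 2 sigma from the mean.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(2000))
            if abs(x - mu) <= 2 * sigma]
```

Run it and the spread shrinks every generation - by the last round the "model" produces a narrow, bland sliver of what the original data contained. Mangled, bland, bad outputs, exactly as described.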
On top of all of this, companies are going all-in on a flawed idea that is energy hungry (IDC Report Reveals AI-Driven Growth in Datacenter Energy Consumption, Predicts Surge in Datacenter Facility Spending Amid Rising Electricity Costs). One estimate says the "AI" industry could use as much energy as the Netherlands within a year or two (Warning AI industry could use as much energy as the Netherlands), and one ChatGPT query uses as much energy as burning a light bulb for 20 minutes (https://www.npr.org/2024/07/12/g-s1...crosoft-a-major-contributor-to-climate-change) - considerably more than one basic search on a non-"AI"-powered search engine. This is neither responsible nor justified; on a planet that is literally burning, we're wasting more energy for even worse answers to our flippant questions.
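The light-bulb comparison is easy to sanity-check with rough numbers. The figures below are commonly cited ballpark estimates (roughly 3 Wh per ChatGPT query, roughly 0.3 Wh per conventional search, a typical 9 W LED bulb), not numbers taken from the linked articles, so treat this as back-of-the-envelope only:

```python
# Back-of-the-envelope check - all inputs are rough estimates.
CHATGPT_WH_PER_QUERY = 3.0   # ~3 Wh per query (ballpark estimate)
SEARCH_WH_PER_QUERY = 0.3    # ~0.3 Wh per conventional web search
LED_BULB_WATTS = 9.0         # a typical modern LED bulb

bulb_minutes = CHATGPT_WH_PER_QUERY / LED_BULB_WATTS * 60
ratio = CHATGPT_WH_PER_QUERY / SEARCH_WH_PER_QUERY
print(f"one query ~= running the bulb for {bulb_minutes:.0f} minutes")
print(f"one query ~= {ratio:.0f}x one basic search")
```

That lands on 20 minutes and roughly a tenfold gap over a basic search, consistent with the claims above. Multiply by billions of queries a day and the datacenter numbers stop being surprising.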
And I'm not the only one who thinks that. At the end of that first article I linked to:
But even [Sam] Altman, [OpenAI's CEO,] as he markets the products for a variety of uses, doesn’t count on the models to be truthful when he’s looking for information for himself.
“I probably trust the answers that come out of ChatGPT the least of anybody on Earth,” Altman told the crowd at Bagler’s university, to laughter.
This whole "AI" thing is a huge scam; it's just something tech bros pivoted to when their last ridiculous money-making scam, NFTs, fizzled out. And the sooner this one also fizzles out, the better.