AI Cameras

It's ironic and frightening at the same time to know that we've created AI based on how our own brains and minds work.
[...]
If we want AI to generate different results, we need to feed it different data and use other algorithms to generate other connections.
Sadly, this is fundamentally wrong.

I spend way too much time reading and listening to tech-critical journalism (Cory Doctorow, Paris Marx, Ed Zitron, et al.), and someone recently did a long-form dissection of this myth. And it is a myth, one that's been cultivated for a very specific reason.

Altman and the like really, really want people (by which I mean investors) to believe they're en route to Artificial General Intelligence. They're not. Generative AI is a dead-end - it's nothing more than fancy auto-complete. All it can do is regurgitate things that have been put into it. The fundamental difference between the human brain (and the science fiction concept of Artificial General Intelligence) and the current crop of things presented as "AI" is that the human brain can make new connections and extrapolate outwards from the data that goes in. Generative AI cannot. It can only repeat what it "knows" - what it's been trained on - and if presented with a gap in that knowledge, it will "hallucinate" (which is marketing-speak from the tech industry for "making sh*t up"). And this is widely acknowledged by the people making these plagiarism machines as an unfixable problem (OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws).

GenAI isn't going to look at an apple falling and figure out how gravity works. It's not going to pull burdock seeds off its coat and invent velcro. It's just going to keep scraping Reddit sh*tposts and use that as a reference to tell people to use glue to keep their cheese on a pizza.
 
Or indeed, to tell my 9-stone wife that curing type 2 diabetes (if she had it) would require her to lose 23 stone…
 
Preach!

That's the same conclusion I have come to (unscientifically on my part). Generative AI is going to continue looking pretty much like it does now, with the same problems, because it's made to capture an audience (us, corporations, investors) at a specific moment in time (now). It's not truly learning and expanding; it's just spitting out a product. Once we all get a little more familiar with its defining features, it'll seem like the hollow, lifeless drivel that it is, because it's not going anywhere fresh or exciting. This is how the bubble pops, because this is just about as good as this tech is going to get.
 
I think things start to make sense when you look at how insanely incestuous and corrupt the financials behind the bubble are. This video does a pretty good job of showing (or at least illustrating) the cash flow:


And this blog post (by one of the co-founders/original team members of Nvidia, no less!) shows how insane/flawed the valuations are: Depreciation

The whole thing is basically one big pump-and-dump: overhype the product and overhype the goals/future development, ramp up the share price, make your money, and get out before it collapses. The numbers involved are already bigger than the dot-com bubble of the '90s, and may even be bigger than the housing market crash of 2008.
 
I've come to the conclusion that all "Generative AI" is, is painting with pixels instead of the real messy stuff. One puts together an image in layers, the same way you would construct a painting. It's not unlike the realist movement in art, which drew a lot of criticism when it first appeared on the scene. AI images are still art, just not in the way one considers an honest attempt to portray something in a real way.

A lot of it is used for illustrative purposes by so-called creatives to flesh out the drivel stories they post to get views and likes on YouTube. As soon as I see an AI-generated image in my feed that shows how ignorant the creator is about the subject matter they are foisting on the masses, I have no need to watch the video and listen to the same false voice that cannot distinguish "IN" from "in" or "inches", so I tell the algorithm I do not wish to see anything from that creator in my feed for the foreseeable future.

But it's not just the AI-image-heavy stuff that irks me. Too many times I see images that are actual photographs, but they are either of the wrong subject or not even relevant to the storyline. So folks are using the technology to add a slide show to the uninformed slop they are pushing, looking up real photos to match their ignorant narrative, which is likely also AI-generated.

Someday, once consumers decide they are sick and tired of seeing AI content, it will mostly go away. It will have a use in fantasy-heavy industries such as movies, cartoons, and, ahem, adult entertainment, but beyond that I don't see a real need for it.

PF
 
AI images are still art
Only if by the painting analogy you mean a painter dictating to some kind of servant who wields the brush but has no agency in how the painting is supposed to look. That's a scenario that could have happened anytime from the early Middle Ages up to now, but it hasn't ever happened in a way that established it as a legitimate art form, because it ain't art. Neither is AI.
 