AI Cameras

It's ironic and frightening at the same time to know that we've created AI based on how our own brains and minds work.
[...]
If we want AI to generate different results, we need to feed it different data and use other algorithms to generate other connections.
Sadly, this is fundamentally wrong.

I spend way too much time reading and listening to tech-critical journalism (Cory Doctorow, Paris Marx, Ed Zitron, et al.) and someone recently did a long-form dissection of this myth. And it is a myth, and it's one that's been cultivated for a very specific reason.

Altman and the like really, really want people (by which I mean investors) to believe they're en route to Artificial General Intelligence. They're not. Generative AI is a dead-end - it's nothing more than fancy auto-complete. All it can do is regurgitate things which have been put into it. The fundamental difference between the human brain (and the science-fiction concept of Artificial General Intelligence) and the current crop of things presented as "AI" is that the human brain can make new connections and extrapolate outwards from the data that goes in. Generative AI cannot. It can only repeat what it "knows" - what it's been trained on - and if presented with a gap in that knowledge, it will "hallucinate" (which is marketing-speak from the tech industry for "making sh*t up"). And this is widely acknowledged by the people making these plagiarism machines as an unfixable problem (OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws).

GenAI isn't going to look at an apple falling and figure out how gravity works. It's not going to pull burdock seeds off its coat and invent velcro. It's just going to keep scraping Reddit sh*tposts and use that as a reference to tell people to use glue to keep their cheese on a pizza.
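If you want to see the "fancy auto-complete" idea in miniature, here's a toy bigram sketch (my own illustration - a real LLM is vastly bigger and fancier, but the failure mode is the same): it can only remix its training text, and when asked to continue from a word it has never seen, it just emits confident-looking filler.

```python
# Toy "fancy auto-complete": a bigram model that records which word
# follows which in its training text, then generates by lookup.
# Deliberately crude - just an illustration of the failure mode.
import random
from collections import defaultdict

corpus = ("the apple falls from the tree . "
          "the cheese melts on the pizza . "
          "the glue holds the poster on the wall .").split()

# Record every word that follows each word in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(word, length=8):
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:               # gap in its "knowledge"...
            options = corpus          # ...so it guesses something plausible-looking
        out.append(random.choice(options))
    return " ".join(out)

print(complete("the"))      # fluent-looking, but only ever remixes the corpus
print(complete("gravity"))  # unseen word -> confident filler, i.e. a "hallucination"
```

The fallback branch is doing exactly what "hallucination" dresses up: when the model has no data for a context, it doesn't say "I don't know" - it fills the gap fluently.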
 
It's just going to keep scraping Reddit sh*tposts and use that as a reference to tell people to use glue to keep their cheese on a pizza.
Or indeed, to tell my 9-stone wife that curing type 2 diabetes (if she had it) would require her to lose 23 stone…
 
Generative AI is a dead-end - it's nothing more than fancy auto-complete. All it can do is regurgitate things which have been put into it.
Preach!

That's the same conclusion I have come to (unscientifically on my part). Generative AI is going to continue looking pretty much like it does now, with the same problems, because it's made to capture an audience (us, corporations, investors) at a specific moment in time (now). It's not truly learning and expanding, it's just spitting out a product. Once we all get a little more familiar with its defining features, it'll seem like the hollow, lifeless drivel that it is, because it's not going anywhere fresh or exciting. This is how the bubble pops, because this is just about as good as this tech is going to get.
 
I think things start to make sense when you look at how insanely incestuous and corrupt the financials behind the bubble are. This video does a pretty good job of showing (or at least illustrating) the cashflow:

And this blog post (by one of the co-founders/original team members of Nvidia, no less!) shows how insane/flawed the valuations are: Depreciation

The whole thing is basically one big pump-and-dump - overhype the product, overhype the goals/future development, ramp up the share price, make your money, and get out before it collapses. The numbers involved are already bigger than the dot-com bubble of the '90s, and may even be bigger than the housing market crash of 2008.
 
I've come to the conclusion that all that "Generative AI" is, is painting with pixels instead of the real messy stuff. One puts together an image in layers, the same way you would construct a painting. It's not unlike the realist movement in art, which drew a lot of criticism when it first appeared on the scene. AI images are still art, just not in the sense of an honest attempt to portray something real.

A lot of it is used for illustrative purposes by so-called creatives to flesh out the drivel stories they post to get views and likes on YouTube. As soon as I see an AI-generated image in my feed that shows how ignorant the creator is about the subject matter they are foisting on the masses, I have no need to watch the video and listen to the same false voice that cannot distinguish "IN" from "in" or "inches", so I tell the algorithm I do not wish to see anything from that creator in my feed for the foreseeable future.

But it's not just the AI-image-heavy stuff that irks me. Too many times, I see images that are actual photographs, but they are either of the wrong subject or not even relevant to the story line. So folks are using the technology to add a slide show to the uninformed slop they are pushing, looking up real photos to match their ignorant narrative - which likely is also AI generated.

Someday, once consumers decide they are sick and tired of seeing AI content, it will mostly go away. It will have a use in fantasy-heavy industries such as movies, cartoons, and, ahem, adult entertainment, but beyond that I don't see a real need for it.

PF
 
I've come to the conclusion that all that "Generative AI" is, is painting with pixels instead of the real messy stuff. One puts together an image in layers, the same way you would construct a painting. It's not unlike the realist movement in art, which drew a lot of criticism when it first appeared on the scene. AI images are still art
Only if by the painting analogy you mean a painter dictating to some kind of servant who is wielding the brush but who has no agency in how the painting is supposed to look. That's a scenario that could have already happened anytime from the early Middle Ages up to now, but it hasn't ever happened in a way that has established it as a legitimate art form, because it ain't art. Neither is AI.
 
Only if by the painting analogy you mean a painter dictating to some kind of servant who is wielding the brush but who has no agency in how the painting is supposed to look. That's a scenario that could have already happened anytime from the early Middle Ages up to now, but it hasn't ever happened in a way that has established it as a legitimate art form, because it ain't art. Neither is AI.
Some of the great Renaissance artists had interns to make copies of their popular paintings for sale, some of whom went on to develop their own style and surpass their masters. Generative AI is art, just a very pedestrian sort that will fall out of favor quickly, I hope. I'm tired of seeing perfectly balanced faces and wonderfully glowing light on everything.

PF
 
Some of the great Renaissance artists had interns to make copies of their popular paintings for sale, some of whom went on to develop their own style and surpass their masters. Generative AI is art, just a very pedestrian sort that will fall out of favor quickly, I hope.
This is a bad analogy; in that case, the interns were basically doing the job of a photocopier, but even if you want to compare them with Generative AI, those interns were humans. That's important; I don't think you can call something "art" if it's had absolutely minimal human interaction and involvement in its creation. Art requires both intent and some sort of "soul", however you want to define that. I don't see any soul in GenAI-produced slop.
 
When I was in college in the '60s (yeah, I still recall some of it), there was this guy who used to come into the student center and arrange chairs in stacks on the floor. He would then declare it "Art". I was ignorant of Duchamp and the concept of conceptual art at the time, so I just thought this character was amusing but harmless. With AI (Artificial Ignorance) I no longer consider it amusing or harmless.
 
Whether by happenstance or design, there's been speculation that Tim Cook's Apple could be a winner in the AI game, despite their "failure" to build much of an AI presence to date. But using AI simply to generate imagery or write essays doesn't seem like the sort of high-value application that can sustain such large expenditures.

OTOH, unglamorous applications such as Adobe Lightroom's Remove tool are worlds better than the older Clone tool, and it's not hard for me to imagine other AI tools for erasing damage from fungus, or for restoring colors where the original dyes have discolored or are almost entirely gone. AI as we know it today doesn't seem to be designed for originality, but it's very good at pattern recognition.
 
AI is not a "one size fits all" deal. There are various levels of it: AWB, autofocus, IBIS, and so on. We are using AI already. If you drive a car made in the last ten years, you are dealing with AI: it controls the engine. The Luddites attacked modern weaving machines. You can see what came of that.

As for increasing dependency on others, did you weave and sew your clothes, build your home, construct your computer or camera, analog(ue) or digital, or cobble your shoes? Did you take out your own appendix? It seems that modern societies require dependence on others. OTOH, you can live in a cave by yourself and make all you use. It is still an option.

And while a camera or camera-like device may be able to do all sorts of AI magic for the shooter, it still cannot pick a good shot to take. It is still the hand that holds the brush, not the brush, and the hand that holds the camera, not the camera, that makes the difference. The rest is trivial.
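To put those "levels" in perspective: a lot of what gets the AI label is just arithmetic. Here's a minimal sketch of gray-world auto white balance, one of the oldest of these automations (my own toy example - numpy and the (H, W, 3) float image layout are assumptions, not anyone's actual camera firmware):

```python
# Minimal gray-world AWB sketch: scale each channel so the scene
# averages out to neutral gray. No generative model anywhere in sight.
import numpy as np

def gray_world_awb(img):
    """Balance an (H, W, 3) float RGB image via the gray-world assumption."""
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel averages
    gains = means.mean() / means             # scale channels toward a common gray
    return np.clip(img * gains, 0.0, 1.0)

# A warm-tinted test scene: red channel runs hot, blue runs cold.
rng = np.random.default_rng(0)
scene = rng.random((4, 6, 3)) * np.array([1.0, 0.7, 0.5])
balanced = gray_world_awb(scene)
print(scene.reshape(-1, 3).mean(axis=0))     # skewed channel means before
print(balanced.reshape(-1, 3).mean(axis=0))  # roughly equal means after
```

Call it AI or call it automation - either way, it's still the hand holding the camera that decides whether the balanced frame was worth taking in the first place.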
 
The Luddites attacked modern weaving machines. You can see what came of that.
The Luddites have an unfair rep; as is always the case, history is written by "the winners", which in this case is the mill owners and wealthy.

Despite what people are usually taught, the Luddites were actually protesting against the loss of jobs and reduction in pay. The argument was pro-workers' rights, not anti-machinery.

As per Wikipedia:

Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes.

So while there are some direct parallels with critiques of GenAI and the discourse around the implementation of it today, to argue they were just against "progress" is a misrepresentation of what the Luddites were fighting against (which conveniently obscures how awful mill owners were to their employees at the time).

I grew up in the region of England where both Arkwright's mill and this movement were based, so this history is pretty close to my heart.

Anyway. I can't remember if I've already said it in this thread, but describing IBIS, autofocus, etc. as "AI" is a misnomer, and any companies slapping the AI terminology on them will no doubt quietly drop that term when the current hype around GenAI dies. Automation isn't the issue here (even though, as discussed before, I largely find autofocus to be an unreliable annoyance) - conflating image generation that is not only entirely based on the theft of other people's work but also hugely (and unnecessarily) resource-intensive with basic automation technologies is a bit of convenient hand-waving none of us should be allowing tech companies to get away with.
 
Despite what people are usually taught, the Luddites were actually protesting against the loss of jobs and reduction in pay. The argument was pro-workers' rights, not anti-machinery.
My thoughts on the Luddites exactly. After they were replaced and the wages of the few remaining workers were reduced, they were protesting their treatment. Capital continues in the same vein today, of course. Our ‘system’ has become ‘too efficient’ - the only way to keep people employed is to make stuff to be thrown away in a destructive cycle of endless consumption. The owners of capital, having made people redundant, prefer to retain the value released rather than feed the now unemployed.
 
AI is not a "one size fits all" deal. There are various levels of it: AWB, autofocus, IBIS, and so on. We are using AI already.
Machine learning isn't what most of us are bothered by. LLMs and generative AI, and the way they are being implemented to replace creativity with mimicry, are an affront to the human experience.
 
Machine learning isn't what most of us are bothered by. LLMs and generative AI, and the way they are being implemented to replace creativity with mimicry, are an affront to the human experience.
Exactly - I’m using ML at work to help make data mapping more tractable. But we’re under no illusion that it’s intelligent.
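To give a flavour of what that looks like (a toy sketch, not our actual tooling - the column names are invented, and difflib's string-similarity ratio is standing in for whatever model one would really use): the job is suggesting which source columns line up with which target fields, with a human still signing off on every match.

```python
# Hypothetical schema-mapping helper: propose the closest target field
# for each source column by string similarity. A stand-in for the real
# model - the point is assistance with a dull mapping task, not "intelligence".
import difflib

source_cols = ["cust_name", "cust_addr_1", "ord_total", "ord_dt"]
target_fields = ["customer_name", "customer_address", "order_total", "order_date"]

def suggest_mapping(sources, targets):
    """Pair each source column with its most similar target field."""
    return {
        col: max(targets,
                 key=lambda t: difflib.SequenceMatcher(None, col, t).ratio())
        for col in sources
    }

for src, tgt in suggest_mapping(source_cols, target_fields).items():
    print(f"{src:12} -> {tgt}")  # suggestions only; a human reviews each one
```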
 
The Luddites have an unfair rep; as is always the case, history is written by "the winners", which in this case is the mill owners and wealthy.

[...]

Anyway. I can't remember if I've already said it in this thread, but describing IBIS, autofocus, etc. as "AI" is a misnomer, and any companies slapping the AI terminology on them will no doubt quietly drop that term when the current hype around GenAI dies.

I studied and wrote history for six years, the final two at the graduate level. To say that history is written by the winners is a prime example of the Dunning-Kruger effect. It is a popular and oft-repeated canard but has no basis in fact. A Marxist historical interpretation of the Luddites would be quite anti-owner and pro-worker, just as an example. A more-than-cursory reading of the Luddites, or of any other historical event, will reveal a lot of opinions. While studying history I saw no evidence of any conspiracy of thought. Rather the opposite: seminars could get quite heated over interpretations. The purpose of education is to encourage thought. Despite whatever impressions you may have, it is not a conveyor belt of pre-assigned opinion.

Broadly speaking, AI is any intelligence that is artificial, and that could even include the vacuum advance that was once on automobile distributors; it can certainly be applied to autofocus, auto ISO, and so on. This seems to be degenerating into semantic quibbling. IMNSHO the increasing introduction of "automatic" functions to camera firmware/software increases the chances of a "keeper" rather than diminishing them. That is the point, isn't it? Others may enjoy the old analog(ue) exercise of film, light meters, manual focus, and setting lens opening and shutter speed. I do not. I would not relish manually advancing the spark in my car either. It is 2025, almost 2026; let's reap the benefits of what science has brought us. I do understand that some folks like the analog exercise, but doing things the hard way for whatever reason does not appeal to me. It's like playing golf with a croquet mallet or making love standing up in a hammock.
 
Machine learning isn't what most of us are bothered by. LLMs and generative AI, and the way they are being implemented to replace creativity with mimicry, are an affront to the human experience.

I call your attention to a parallel: TV. Some folks are appalled at what is broadcast. TVs come with channel selectors and, in the case of extreme disgust, an off switch. None of us are obliged to watch what we do not want to watch. Likewise, we are not obliged to use all the "auto" functions on cameras. Turn them off if you don't want to use them. Any camera I have has that option.
 