AI Cameras

I studied and wrote history for six years, the final two at the graduate level. To say that history is written by the winners is a prime example of the Dunning-Kruger effect. It is a popular and oft-repeated canard with no basis in fact. A Marxist historical interpretation of the Luddites, just as an example, would be quite anti-owner and pro-worker. A more than cursory reading of the Luddites or any other historical event will reveal a lot of opinions. While studying history I saw no evidence of any conspiracy of thought. Rather the opposite: seminars could get quite heated over interpretations. The purpose of education is to encourage thought. Despite whatever impressions you may have, it is not a conveyor belt of pre-assigned opinion.

Broadly speaking, AI is any intelligence that is artificial; that could even include the vacuum advance that was once on automobile distributors, and it can certainly be applied to autofocus, auto ISO and so on. This seems to be degenerating into semantic quibbling. IMNSHO the increasing introduction of "automatic" functions in camera firmware/software increases the chances of a "keeper" rather than diminishing them. That is the point, isn't it? Others may enjoy the old analog(ue) exercise of film, light meters, manual focus and setting lens opening and shutter speed. I do not. I would not relish manually advancing the spark in my car either. It is 2025, almost 2026; let's reap the benefits of what science has brought us. I do understand that some folks like the analog exercise, but doing things the hard way for whatever reason does not appeal to me. It's like playing golf with a croquet mallet or making love standing up in a hammock.
My car has neither spark plugs nor an air induction system 😉
 
I call your attention to a parallel: TV. Some folks are appalled at what is broadcast. TVs come with channel selectors and, in the case of extreme disgust, an off switch. None of us are obliged to watch what we do not want to watch. Likewise, we are not obliged to use all the "auto" functions on cameras. Turn them off if you don't want to use them. Any camera I have has that option.
No, not my point. Your "AI" auto features being referred to in cameras are machine learning. Not LLM/gen AI. We need to keep our terms defined, or this conversation gets confusing fast.
 

Very good point. We are fast approaching a "Burn the witch" hysteria on the subject. AI in the pure sense could be, and is, a great tool. But like any tool it can be misunderstood and misused. Like a knife.
 
It's on purpose; AI companies know what they're doing. Right now the gen AI/LLM stuff is providing all the flash and none of the real utility, and the public is still lapping it up. Soon enough people are going to get tired of posting long AI-written rants on Facebook or LinkedIn, and (among many other poor bets by corporations) Microsoft's $80-something billion commitment to Copilot is going to bite them in the backside.
 

I do not follow any of this and do not directly use any of it. However, I think it is inescapable and offers some real value.
 

It really isn't inescapable - nor does it have any value. But there's definitely a reason you'd think that.

As @agentlossing and I have both pointed out, there's a lot of marketing going on here - the big tech guys like Sam Altman and Sundar Pichai are playing very specific games to make it seem like GenAI is the future in order to boost their share prices. One of them is conflating GenAI with what used to be called "machine learning" - or even just automation - which a lot of companies have foolishly bought in on (rebranding something their software already did as "AI" allows them to ride the hype bubble). Another is pretending GenAI has a path to "real" AI - the AI we think of because of decades of science fiction (when, as previously mentioned, GenAI is a dead end).

But the most pernicious of these claims/ideas is that you can't escape GenAI. You can. You don't have to use it - and it's probably better if you don't (study after study has shown very little benefit - if any! - to using AI, at either the business or personal level; for instance, one of the most recent in-depth studies found that AI assistants misrepresent news content 45% of the time, with the worst offender, concerningly, being Gemini, Google's AI - which is now forced upon everyone who uses Google as a search engine). But tech companies continue to shove GenAI into everything, both to claim a large user base (as just mentioned, if you use Google, you're using Gemini by default - whether you want to or not! - which artificially bumps up their numbers) and to make it seem inevitable - but it's no more inevitable or inescapable than crypto or NFTs were.

This podcast interview on the subject is well worth a listen: Generative AI is Not Inevitable w/ Emily M. Bender and Alex Hanna - Tech Won’t Save Us
 
It's not truly learning and expanding; it's just spitting out a product. Once we all get a little more familiar with its defining features, it'll seem like the hollow, lifeless drivel that it is, because it's not going anywhere fresh or exciting. This is how the bubble pops, because this is just about as good as this tech is going to get.
Flippant because even in its present state, AI, in the hands of, let's say, scientists, medical practitioners, and generals who have been trained to use it, is an incredibly potent technology that boosts their ability to do both good and bad a thousandfold.
 

I am gad that you are keeping an open mind on this.
 
There is recent research showing that, to date, GenAI has not delivered the results predicted, expected and hoped for in the medical field. However, human-independent weapons may have killed people.

I’m out now.
 

I'll be honest: saying "I do not follow any of this" and then responding to someone who does by saying "I am gad [sic] that you are keeping an open mind on this" doesn't sit well with me. And to roughly quote Tim Minchin, "the problem with an open mind is that your brains fall out."

Some of the things I do for work necessitate me keeping on top of developments in this field. I read and listen to a lot of interviews, studies, and analysis on the subject. Once you get away from the hype machine - people who have a vested interest in making GenAI seem like the best thing since sliced bread - it is bad. Very, very bad. On so many levels.

Have a look at the stuff cited in that last comment as a starter. Another good source would be this podcast interview with the AI researcher who was fired from Google for raising concerns about the nature of GenAI. (And there's a dissection of the paper that got her fired and the context surrounding it here.)
 
As a whole, AI can be a powerful tool for some - for example, for a physician diagnosing illness or predicting the outcome of treatment. For creating photography? Uh-uh. Not good. Zero, zed, nope. It smells of dishonesty. Which, after all, is what we are seeing on the political front right now.



 
IMO, much of the buzz surrounding generative AI amounts to marketing. Much of it doesn't actually exist yet, but they hope that if enough people buy into it -- maybe someday?

At the moment, we've got great Engine Control Units (ECUs) for automobiles, intelligent photo-retouching software, cameras which can lock focus on an individual in a crowd of other people, natural-language front ends for search engines, and chatbots which seem empathetic. But I wonder if we're still just as far away from realizing a Great and Powerful Oz as we were in the 1950s. The challenge in using AI in creative endeavors is that creativity isn't necessarily logical, but mimicry is pretty straightforward.

Speaking of pipe-dreams, remember the Metaverse? A vast amount of money was poured into that bottomless pit.


And then, there was this thing. When it failed to gain traction with home users, they flogged it to business, industry and military users.
 
