JohnGellings
Well-known
Time will tell, but AI is being used for a lot more purposes than NFTs ever were. I do not think you can compare them unless you think AI is a fad, which I highly doubt it is as a whole.

I'm an "I'll believe it when I see it" guy. There were thousands of people telling me that I was an idiot, a complete fool, for not investing in NFTs in 2021. Proclaiming, loudly, that in a couple of years I would see how important the new technology was, and that I was missing out for not seeing its potential.
The continuing theme among all the hype was that "technology = good, NFTs = technology, so NFTs = good". Anyway, if you haven't been paying attention, the NFT market is about 95% smaller this year than it was in 2022. Millions of dollars have been lost. Technology isn't always good. And potential isn't always realized. There was a time when people, quite literally, thought blowing smoke up people's asses was a major leap forward in medical science. Well, the tech industry has been heavy into the digital equivalent of such activities for the past few years.
AI generation is already in a stage of diminishing returns. That's why it's being marketed so heavily: everybody who knows, knows that this is about as good as it's going to get, and sinking more money into it isn't going to make it appreciably better, so it has to be launched now or it's not going to make money. What they want us to believe is that we're going to keep seeing the massive improvements in results they were making two years ago, when everybody was suddenly in a race to get their products to market (for the reason stated: spending more time and money wasn't going to ensure much better results). Well, it's not going to happen. That's plainly obvious when you compare the pace of improvement two years ago to the pace today.

Yes, inevitably there will be a time when AI-generated material is indistinguishable from the real thing, even to experts, as is the case today with CGI effects in movies. But it's not going to happen in the next year or two as promised; it's probably going to take five or ten more years. Which, yes, is a lot nearer than it seems, but it also gives some time to get the legal issues sorted out, which they really don't want us to do; hence all the "don't resist, just give in and be steamrolled" rhetoric being pushed by people with vested interests.
bcostin
Well-known
The violation wasn't that the images existed, it was that they were pretending to be something they were not.
Artists have IP rights to images that they create, not images that look like the ones they create. There's obviously nothing wrong with making images that are inspired by Ansel Adams. He spent much of his life teaching people how to do exactly that, after all. Whether the images are created by a person using a camera, acrylic paint, or a computer doesn't really enter into it. They're all just tools.
AI isn't going away. It's been overhyped, like every new technology since iron replaced bronze, but all that aside it is a useful new tool, just like photography once was.
In fact, the long historical debate about whether photography was a legitimate art form or just a cheap way of ripping off real artists sounds an awful lot like some debates about AI today.
Ko.Fe.
Lenses 35/21 Gears 46/20
AI is a dirt-cheap substitute for attempts to reproduce real exposures.
A dude who fakes it with AI is an idiot all round, but viewers...
Mos6502
Well-known
I would not liken these two "debates" at all. Photography was seen as incapable of being art because it was perceived as an exacting science which left no room for creative control. It could only make an exact record of a real scene as it appeared (or so the Victorians thought). AI, on the other hand, is criticized for pretty much the opposite: it's almost incapable of handling simple facts and producing accurate outputs. It is quite literally just automating the process of making up bullshit.
I've got to comment that it's wild to me that when the internet was young and fresh, it was considered irresponsible to look things up on Wikipedia and take them as truth. Now in 2024, everybody is so massively idiotic that they'll believe answers from an "AI" that doesn't give any references, and works in a process that's pretty much completely opaque. How stupid have we become as a society?
boojum
Ignoble Miscreant
How often do people believe news sources without attributions? Likewise the even worse sources on social media. Before all this internet hoo-ha it was gossip that drove people. This is not new, it is just how the BS is delivered. The pity is that folks seem to have not only abandoned critical thinking but thinking in general. Yes, I am a grouchy old bastard.
Mos6502
Well-known
Yes, but whereas bad actors used to have to write out their stories, or have some skill to photoshop fake images, now the process is automated. Things are going to be hundreds, if not thousands, of times worse, because there is a technology that is basically specialized for the task of creating frauds and scams, and no technology specialized in keeping it in check.
wlewisiii
Just another hotel clerk
I remember the first AI winter killing the Lisp Machines.
I look forward to the second AI winter.
boojum
Ignoble Miscreant
The printing press was a terrible threat, too. Along with radio, newspapers and TV. Somehow we will survive because the grifters will be self-revealed.
Mos6502
Well-known
Apples and oranges.
I think one could also argue that while all of those technologies could be used for evil purposes, they were not devised for evil purposes, whereas AI generation is at its core centered on ripping people off, both in its creation and development and in its capabilities. Broadcasting and publishing all have a higher cost of entry as well, and the law has caught up with most things regarding broadcast or published content. You cannot, for example, advertise cigarettes that cure cancer (although AI may very well tell you that you can; after all, it has already suggested people eat small rocks for good health) or broadcast news that the USS Eisenhower was destroyed (a claim recently made online and circulated with AI-generated images of the ship exploding).
The interesting thing about this is that AI doesn't even need a bad actor behind it to make it give out bogus or dangerous claims, whereas with radio, etc., intent still matters. AI may very well tell people to do something that'd get them killed. We've already seen companies implement AI and then try to wave off responsibility for wrong answers given by the AI to customers, clients, et al. Which points to another interesting thing: a large part of the push to adopt AI doesn't seem so much aimed at reducing personnel as it seems aimed at avoiding responsibility.
boojum
Ignoble Miscreant
If you cannot prove this premise as anything other than opinion, I will just dismiss it as idle chatter. If you cannot prove it, it is artificial intelligence. While I have no proof, it would not surprise me that some Luddites raged against the printing press. Automobiles were denounced as vehicles of sin. Progress of any sort is denounced. Whether AI is progress is not in doubt. It is the application that is in doubt, like gunpowder.
Are computers bad? Are lawyers bad? They cause no harm until they have a client. Radio has been used malevolently as has the press and TV. All forms of communication have been used malevolently. Simple speech can be malevolent. AI may make it easier. But just look at the venom and BS and lies spread on computers, especially as they got more "user friendly". Every ordinary idiot can get online and publish anything, and does. I think as the mouth gets more sophisticated so, too, will the ear.
Coldkennels
Barnack-toting Brit.
That really isn't the case. I wish it were, but the reality is that as the tools of manipulation have become more powerful and complex, the resistance to them seems to be decreasing massively.
Look at Nixon vs Trump: the Watergate scandal ruined Nixon's career. Meanwhile, Trump's had more scandals (and criminal cases) than I could even list and he still has a huge following. Joseph Goebbels - the inventor of the term "fake news" - would have killed (probably quite literally) to have the tools and, by extension, the wild devotion that Trump has at his fingertips today.
As for AI: I've been listening to the Better Offline podcast of late. The presenter lays out a lot of the failings of the "AI" hype train; he's somewhat acerbic and angry a lot of the time, but the points are valid. "AI" isn't profitable at all (no company involved in producing these systems has turned a profit yet - but they have burned through an insane amount of energy and resources in the process), "AI" is built entirely on scraping content produced by other people and is completely incapable of producing anything in a vacuum (try searching for your own content on Have I Been Trained? - I can guarantee some of your content is already part of the datasets used by these things), and there literally isn't enough training data in the world to reach the sort of quality/fidelity/"intelligence" these companies claim (one of the current "solutions" being presented is to have the "AI" systems generate "training data" that can be fed back to it - but tests show that this produces complete collapse of the system, with the "AI" then producing absolute nonsense in response to every prompt.)
And the nonsense is already a problem: when the CEO of Google says it has "no solution" for its "AI" providing wildly incorrect information (but continues to force it into every single thing Google produces), you have to question what on earth is going on here.
It feels a lot like the 90s dotcom bubble at the mo. Will anything survive when the bubble bursts? I'm not sure. "AI" feels a lot like NFTs: a terrible "solution" that no one asked for to solve a problem that no one has.
boojum
Ignoble Miscreant
Much of what you say is true, and I agree with it. However, you can easily find just as many people who have good reasons why AI is just great. And this is just the edge; there will be more going into the pot before this story is over. It is like a bumptious and clever child who has not yet learned manners. And your reference to Goebbels is important because it illustrates that this is a well-traveled road. The uneducated, the low-info voter, the extremists and the like are attracted to the radical, nativist, isolationist creed. This has all happened before.
I have faith that our democracy will survive these current upheavals. Misinformation is rife. I cannot believe what people share with me as gospel and fact. Can they cite facts and proof? No. But they know it is true. This was written about in 1964: The Paranoid Style in American Politics - Wikipedia. Give the book a read. It's on my Kindle reader.
And then this: Watergate did not ruin Nixon's career. It took him out of the limelight and active politics, but he was always an influence in GOP politics and was courted by many, domestic and foreign. Both Truman and Khrushchev were farmers. They did not mince words. They spoke clearly and directly. They both called Nixon a son-of-a-bitch.
JohnGellings
Well-known
Actually, writing the proper prompts to get what you want is not easy with AI either. It is nowhere near automated and takes a lot of practice. I'm not saying it is the same level of skill, but it is not easy.
That said, I completely agree with your latter point.
raydm6
Yay! Cameras! 🙈🙉🙊┌( ಠ_ಠ)┘ [◉"]
Not sure if any of you are aware of this?

"Photographers Outraged by Adobe's New Privacy and Content Terms" (petapixel.com): Photographers and other creatives are livid.

Especially 4.2:

"Adobe General Terms of Use | Adobe Legal" (www.adobe.com): Review Adobe's General Terms of Use, governing access to and use of Adobe products, services, and software, including product-specific terms and conditions.
Mos6502
Well-known
Well, what do you call it when a company creates a product that takes from millions without credit or compensation? Is that not a rip-off? What do you call it when a company claims their product can do something which, in practice, it either cannot do at all or does poorly most of the time? Is that not a rip-off? What would you say about a technology whose chief value is that it makes imitations or forgeries? What do you call it when a company sells a product to another company on the promise that it will reduce costs, but immediately upon implementation that product performs so poorly that it brings lawsuits and causes lost time and money? Come to your senses.
It also amuses me that people will say "oh, but everybody thought computers were bad" or "the internet was bad" when, on the contrary, the practical value of these was easily understood from the get-go. The reason things like NFTs and AI get so much criticism is that they don't really provide much (if any) value. People like to bleat on about "Luddites" when the simple truth is that all technology that provides value has been taken up as quickly as possible. Claiming that a terrible product only gets criticized by "Luddites" because it's "technology" is a cop-out. It's an excuse not to have to think critically.
To paraphrase John Bloomfield Jervis, writing on the subject of the then newly invented steam locomotive: when something works, the beauty of the thing is obvious.
boojum
Ignoble Miscreant
I agree with you in part. I did not know specifically what Adobe was up to. But we are talking about Adobe in particular here, not AI generally. NFTs may be a scam; caveat emptor. AI will be a great tool. Knives are great tools, too. They can be used for good or evil. The tool is not the problem, it is the user.
Gordon Moat
Established
Yes, many professionals are unhappy with the rights-grab attempt by Adobe. I think one of the movie studios first reported this online, and then it spread like wildfire. The tough part is that there is no all-in-one integrated solution to replace Creative Suite. One needs to find several tools, such as Capture One and Affinity, to replace Adobe products.
On a different point, some AI tools will be useful for businesses, by increasing productivity or simply reducing their workforce. Consumer AI is a different matter, with perhaps some novelty uses, though that's not the point, nor the path to monetization. Consumer AI is for data collection, which is how it will be monetized. Think of consumer AI as the malware we tried for years to keep off our computers, and that AI companies now expect we will accept without reading the Terms of Service. Consumer AI will be the largest data grab in internet history, unless/until the public backlash goes against it.
Mos6502
Well-known
It's not so much the "rights grab," which is itself industry standard (seriously, browse the TOS of any service that stores your data in the "cloud": every single one of them is going to have you grant them rights to transmit, modify, make copies of, etc., everything you upload, because cloud storage literally cannot work without these permissions). It's that Adobe wants to access and "review" that content, which is highly unusual. On top of that, Adobe is holding users hostage: they cannot access their work without agreeing to the TOS, and they cannot disagree with the TOS except by cancelling their Adobe subscription, which usually has a (sometimes quite hefty) fee attached to it.
All I can say is, I saw this coming from miles away when Adobe switched to the subscription model. They've always been one of the worst players out there, overpriced, poor customer service, products that are nothing special in the first place, and now they want to have every customer over the barrel so they can make changes to the products or jack up fees any time, and they want you to pay them endlessly for it.
Unfortunately because of the underhanded way AI has been developed, that boilerplate about rights to transmit, modify, etc. content is suspect now, particularly for any company developing AI. Even though Adobe says they are not training their AI on user content, what really is there to stop them? And what good is their assurance that they aren't doing so?
boojum
Ignoble Miscreant
I have avoided Adobe since they went subscription. I just did not like what it seemed to represent: that I was renting the software. I remember back to freeware and shareware, and today I operate in Linux, where just about everything is free, like the JPG editor GIMP. GIMP is available on other platforms too. And I can edit RAW images in Linux editors, I forget which. GIMP does not change the original; it keeps the mods you made in a "sidecar" and you call that up to see the modded image. If you want a hard copy you can export one.
So, if you are worried about your images: 1) keep them on your own storage media, and 2) use editors which do not have multi-page EULAs. But no matter what you do, once something is posted to the internet it is shared with the world. Anything today can be copied and shared.
Mos6502
Well-known
I switched to Affinity and haven't looked back on any Adobe products. Pretty intuitive to learn, and once purchased, it's yours.