Member request: please report any suspected AI Posts.

It seems unlikely that reliance on forum users to detect and flag AI-created posts will be effective or efficient. There are detection tools available, but they need to be implemented by the platform providers. Embedded signatures identifying AI content would be helpful, but making that happen will require regulatory oversight which the current political climate is unlikely to facilitate.
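For what it's worth, here is a toy sketch of what an "embedded signature" could amount to in the simplest case. Everything here is invented for illustration - the key, the tag format, and the function names - and real provenance schemes (C2PA-style signed metadata) use public-key signatures rather than a shared secret like this:

```python
import hashlib
import hmac

# Toy illustration only: a hypothetical shared key held by the generator.
# Real schemes use signed metadata and public-key cryptography.
SECRET = b"platform-shared-key"

def tag_content(text: str) -> str:
    """Append a provenance tag marking text as machine-generated."""
    sig = hmac.new(SECRET, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n[ai-provenance:{sig}]"

def is_tagged(tagged: str) -> bool:
    """Check whether the trailing tag matches the body it claims to sign."""
    body, _, tail = tagged.rpartition("\n[ai-provenance:")
    if not tail.endswith("]"):
        return False
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tail[:-1], expected)

post = tag_content("This post was written by a model.")
print(is_tagged(post))                             # True
print(is_tagged(post.replace("model", "human")))   # False: body was altered
```

The obvious catch, and part of why platform-side implementation matters: anyone can simply strip the tag, so a signature only proves origin when platforms actually check for (or require) it.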
 
I disagree; AI is easy to spot and flag.

If anyone here suspects AI, simply report it; that way additional eyeballs can give it a second look.

Bear in mind that for an AI post to appear in the first place, it's going to be posted by a newly registered account, which means the mods will have already seen it.

So an RFF member report only applies if the post in question was not closely inspected by a mod before being approved, which does happen from time to time. 😆

This request is just a second line of defense.
 
I can't speculate on AI-generated forum posts specifically, but I can say that, in general, the reason AI is showing up everywhere in everything is that some Silicon Valley investors made a big bet on this technology and have been trying to hype it up as much as possible to protect their investment. The more they hype it up, the more it gets adopted as a way to reduce labor costs by other company executives who don't think critically and don't know any better. The AI industry is not profitable. It's a speculative bubble, and the goal has been to saturate the world with it until it becomes too big to fail in the eyes of governments. Ever the good libertarians, they all want government money to prop up their unprofitable investments, whether through military and intelligence agency contracts or through direct bailouts.

AI is garbage and it shouldn't have a place on forums. We're one of the last vestiges of the internet as it used to exist, before search engines, social media companies, and now AI concentrated and centralized web traffic onto a small handful of platforms. We should keep our little space clean so that we can have productive discussions about our old, often analogue hobby.

This is very interesting. We have proven how gullible we are on carbon and other things. Geoffrey Hinton in his Nobel lecture expressed grave concerns, and one discussion I saw recently speculated on how quick a study AI tools are with Python, which could lead to an independent computer language being generated that shuts us out. Maybe.

Terence Tao, the Fields Medalist from Adelaide, is sceptical about genuine "artificial general intelligence" being within reach of current AI tools, but concedes a sort of "artificial general cleverness" is emerging that can be useful. He observes: "These means may be stochastic or the result of brute force computation; they may be ungrounded or fallible; and they may be either uninterpretable, or traceable back to similar tricks found in an AI's training data."


One such training data flaw was explored by Hiroko Konishi in an online paper, Structural Inducements for Hallucination in Large Language Models.

I read of a PhD supervisor who looked at the most recent chapter from his student, where some neat arguments were laid out with extensive references. The text was plausible but banal and took the matter nowhere, really, and the references, with page numbers and DOIs all complete in the appropriate style, were in large part fabrications.

In Australia, apart from the electricity we soon won't have, the amount of water required to run AI data centres for every user tapping in makes the whole thing sound extremely limited in lifespan.

Bill Nighy on procrastination recalled a Peter Cook routine where one depressed drinker greets another at the bar, "I'm writing a novel..." "Yeah, neither am I." Stuff like that will get us past the door, with the AI bots left on the street in the rain.
 

AIs are useful for doing guided research - they're effectively a replacement for a research librarian. This is how consulting firms like McKinsey and many law firms are using them.

They are not a replacement for critical thinking and human creativity, or for answering teleological questions of meaning.

Long before AIs, though, these skills lost importance in "education" as the academy morphed into full-time social agenda peddling and political indoctrination. So AIs are just finishing the damage the universities started in the 1960s.
 
Long ago Thomas Hobbes had it: "The universities are to the nation, as the wooden horse was to the Trojans."
 
And my take on this is that today, especially in relation to university Social Sciences disciplines (note that I would normally type this as Social "Science" "disciplines", signifying that they are neither of these things), finding a GOOD university (i.e. one that does not indoctrinate but rather teaches thinking skills and provides deep information) is about as rare as finding wooden horse shit.

I recall vividly one experience I had while studying industrial law at uni (this was back in the 1970s). Our law "professor" brought along a mate of his - a Marxist/Maoist "professor" who lectured us at length about why Mao was master of the universe (or something) and why his "cultural revolution" (which was still oppressing the Chinese population) was the best thing that had occurred in the 20th century. Clearly this had nothing to do with industrial law or anything else in our degree; it was merely a chance to spread Maoist propaganda to an audience that could not escape. We had to sit through two or three of these indoctrinations (which wasted our time and energy), but our studies being in a Bachelor of Business degree, and most of us back then still being reasonably conservative, I doubt he had any converts. But it was a dire sign of things to come, both in unis and in wider society.

If the benefits of Marxism were so great, propaganda would not be needed. My dad lived under it for a time in Hungary - he had the good sense to escape to Australia by sneaking across the border, making his way to Hamburg, gaining (legal) immigration status and hopping on a boat to Australia. His opinion of communism held much more sway with me than any commie so-called professor.

I fear that AI will be but one more way to propagandize. At least we have "Grok", an AI mechanism owned and run by Elon Musk. That gives some small hope.
 

The elites learned their lessons from the collectivists of the 20th Century in Germany, Russia, and China. First tribalize society with a grievance agenda, then set the tribes upon each other, seizing absolute power in the process. Two generations on, the culture was lost.

My family is also formerly of that world. Collectivism kills.
 
I think the thing that bothers me the most about the AI wave (you say you want a revolution...) is that it's being driven by profit. I'm not sure there's a good alternative, but I see ugly ahead at several turns.

Markets, capitalism, and profit are all fine as long as there is no fraud and government isn't picking winners and losers.
 
And The Simpsons. Every week I’m paraphrasing Homer - his “Doughnuts. Is there nothing they can’t do?” Or quoting directly if someone is non-responsive at table: “Can’t talk. Eating.”
 
Open the pod bay doors, HAL. 😕


I just realized this is kind of already happening, in a way. Our phones, our tablets, maybe our desktop computers (not sure about the last), and most certainly assistants like "Siri" and "Alexa" - which live both in phones and in stand-alone devices - already listen quietly in the background to pick up trigger words used in ordinary conversation as we go about our daily lives. In the case of phones, it is my understanding that this occurs even when the phone is in sleep mode - it is not actually sleeping, but quietly eavesdropping on us. At the moment they only use this info to do things like serve up adverts, but in the future - I wonder.

Even now I find it creepy when I say something like "I am hungry, I feel like a pizza", only to find that when I next turn the phone on there is a list of local pizza places advertising their wares. There may be ways to turn this functionality off, but the phone makers do not make it easy or obvious how to do it. They are after that glorious, lovely advertising revenue and will do anything to get it.

But I wonder how long it will be before government authorities tap into this in their role as "thought police". In fact they already do - there are many cases where an online suspect's search history is trawled for anything incriminating. The same info can be used by totalitarian governments to embarrass or implicate everyday citizens they wish to shut down on trivial or even fake charges. How much longer might it be before such AI is similarly used to automatically alert the authorities when it overhears us saying (i.e. thinking) something unapproved?

Maybe they already are, as we know that even several years ago they were using phone GPS data to track us - say, to find the names of ANYONE who has been in or near an area where a crime has been committed. (Think January 6th.) Even if we did not break the law, we could find ourselves having to defend our liberty and/or reputation because of AI, even though it is still in a relatively early form. I for one have no intention of breaking the law, but it is not unusual for overly zealous or corrupt law officers to be suspicious of law-abiding citizens - or to affect suspicion because it is convenient to do so.

Of course, sometimes this works in the interest of justice. Several years ago the murder of a young woman walking a short distance home at night was solved in just this way: a known serial offender was picked up on street surveillance video following the woman, and later his car was tracked by his phone's GPS to the place where her body was found many miles away, providing pretty much an ironclad case.
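To be concrete about what the "trigger word" mechanism described above amounts to, here is a deliberately toy sketch. The trigger list and ad categories are invented; real assistants do on-device wake-word detection and far more sophisticated profiling than matching phrases in a transcript:

```python
import re

# Invented triggers, purely to illustrate the mechanism being described:
# match phrases in transcribed speech and map them to ad categories.
TRIGGERS = {
    r"\bpizza\b": "local pizza places",
    r"\b(holiday|vacation)\b": "travel deals",
    r"\bnew car\b": "car dealerships",
}

def match_ads(transcript: str) -> list[str]:
    """Return ad categories whose trigger phrase appears in the transcript."""
    heard = transcript.lower()
    return [ad for pattern, ad in TRIGGERS.items() if re.search(pattern, heard)]

print(match_ads("I am hungry, I feel like a pizza."))
# -> ['local pizza places']
```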
 


In a totalitarian state, the law is written to give power to those in charge. The issue is not the technology, but those willing to give up their freedom for comfort and convenience. Most despots in at least fairly recent history (the last couple of hundred years or so) came to power with the willing support of the masses. For example, at the end of the day, Adolf was elected.

It is quite possible to not be at the beck and call of the technology. It's just inconvenient.
 
I agree that AI is no more than a tool. But it is one with its own unique potential dangers - and this includes the danger of misuse by corrupt and despotic people. Software already has the technical ability to listen in, hear us say "I like Ritz crackers" and then serve up unasked-for ads about where I can buy Ritz crackers locally. How long before it hears us say "I hate this damned government" and then serves up a police surveillance squad to investigate our "seditious" activities? And that is without using AI. How much worse could it be with AI tools?

People who are not at least a little worried about this should watch the German film "The Lives of Others" (2006), which is regarded as quite authentic and accurate about what was actually happening there before the fall of communism. It shows how, in East Germany during the Soviet era, the Stasi routinely surveilled quite ordinary people by bugging their homes to get something on them or keep them in line - not because they were known to be doing something wrong, but because they might do something wrong, or say something wrong. Those being surveilled in the movie were actors, a husband and wife, avant-garde types who could not be relied upon to follow the party line, so they were watched. Back then, this was done by putting a Stasi operative in an attic in a house over the road with a line into the house of whoever was being watched. How much easier it would be with ordinary home computer equipment in every home being used to surveil citizens by an AI-empowered system that could pick up and filter the "juicy bits" to be passed along to investigators.

Sounds paranoid? Yep, it does. But we know from that film (and a reading of history) that it already happened - at least in the more primitive manner available at the time. Here in Oz, we are already being threatened by an "Online Safety" law which is designed to listen in on our online activities and do pretty much this kind of stuff. And of course in this context "safety" means politicians' safety, not ours, and it does not mean safety from physical harm - it means safety from people thinking unapproved thoughts. If that is bad, look at what is happening in Britain, where people are already being jailed for making an "offensive" comment in a PRIVATE message, no less, while online. Now imagine an online safety law further empowered by AI and used to actively, relentlessly, sleeplessly seek out indiscretions to be used to repress us.
 
If what we are told is true, as AI gets smarter it will become harder, perhaps impossible, to detect.

I suspect we are doomed...

Chris
I just watched an IG video by a guy who gave instructions for making AI-generated images look more realistic. He specifically said that AI at the moment makes skin and hair flawless and won't introduce imperfections without direction. So he generates images using one model (Higgsoft) and runs them through another with prompts for pores, wrinkles, burst capillaries, coarse facial hair, etc. The results look a lot more like real people than basic one-pass images. They still look like AI, but this gap will close sooner than we think. Some images I can't tell, and that's scary.
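As a sketch of what that two-pass workflow amounts to - generate() here is a hypothetical stand-in for whatever text-to-image and image-to-image services he actually uses, and the prompts are illustrative only:

```python
# Hypothetical sketch of the two-pass workflow described above; no real
# API is implied, and generate() must be swapped for an actual model client.

BASE_PROMPT = "portrait photo of a middle-aged man, natural light"

REALISM_PROMPT = (
    "add subtle skin imperfections: visible pores, fine wrinkles, "
    "burst capillaries, uneven skin tone, coarse facial hair"
)

def generate(prompt: str, init_image: bytes | None = None) -> bytes:
    """Stand-in for a text-to-image (or image-to-image) model call."""
    raise NotImplementedError("swap in a real image-generation client here")

def two_pass_portrait() -> bytes:
    # Pass 1: base models tend to render flawless, airbrushed skin and hair.
    draft = generate(BASE_PROMPT)
    # Pass 2: an image-to-image step explicitly prompted to add the
    # imperfections the first pass omitted.
    return generate(REALISM_PROMPT, init_image=draft)
```

The point of the second pass is simply that imperfections have to be asked for explicitly, which is exactly what defeats detection heuristics keyed on "too perfect" skin.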
 