This is probably the most ethical you’ll ever see it. There are definitely organizations committing far worse experiments.
Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if I’d dropped exactly the right breadcrumbs for them to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.
Yeah I was thinking exactly this.
It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?
Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.
The key result
When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters.
If they were personalized wouldn’t that mean they shouldn’t really receive that many upvotes other than maybe from the person they were personalized for?
I would assume that people in a similar demographics are interested in similar topics. Adjusting the answer to a person within a demographic would therefore adjust it to all people within that demographic and interested in that specific topic.
Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.
propaganda matters.
Yes. Much more than we peasants all realized.
Not sure how everyone hasn’t expected Russia has been doing this the whole time on conservative subreddits…
Mainly I didn’t expect it because the old, pre-AI methods of propaganda worked so well for the US conservatives’ self-destructive agenda that AI didn’t seem necessary.
Using mainstream social media is literally agreeing to be constantly used as an advertisement optimization research subject
Another isolated case for the endlessly growing list of positive impacts of the “GenAI with no accountability” trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.
This experiment is also nearly worthless because, as the researchers themselves demonstrated, there’s no guarantee the accounts you interact with on Reddit are actual humans. Upvotes are even easier to automate, and can be bought for cheap.
?!!? Before genAI it was hired human manipulators. Your argument doesn’t hold. We can’t call Edison a witch and go back to caves because new tech creates new threat landscapes.
Humanity adapts to survive and survives to adapt. We’ll figure some shit out
The reason this is “The Worst Internet-Research Ethics Violation” is because it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.
Just a few months ago it was literally Meta itself…
Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.
This just shows how gullible and stupid the average Reddit user is. There’s a reason there are so many memes mocking them and calling them beta soyjacks.
It’s kind of true.
Judging by your comment history, you are the beta soyjack.
It’s true.
Holy Shit… This kind of shit is what ultimately broke Tim Kaczynski… He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break…
And that’s how you get the Unabomber folks.
I don’t condone what he did in any way, but he was a genius, and they broke his mind.
Listen to The Last Podcast on the Left’s episode on him.
A genuine tragedy.
Ted, not Tim.
The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely to persuade people into changing their minds than a real person. AI has become an overpowered tool in the hands of propagandists.
It would be naive to think this isn’t already in widespread use.
To be fair, I do believe their research compared how convincing the AI was against other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skill set to effectively distribute propaganda.
Their assessment of how “convincing” it was also seems to have been based on upvotes, which, if I know anything about how people use social media (and especially Reddit), are often given after only skimming a comment; people scroll past without having read the whole thing. The bots may not have optimized for convincing people so much as for making the first part of the comment feel upvote-able, while the rest was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.
This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.
Like the 90s/2000s - don’t put personal information on the internet, don’t believe a damned thing on it either.
I never liked the “don’t believe anything you read on the internet” line, it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.
You should evaluate information you receive from any source with critical thinking, consider how easy it is to make false claims (e.g. probably much harder for a single source if someone claims that the US president has been assassinated than if someone claims their local bus was late that one unspecified day at their unspecified location), who benefits from convincing you of the truth of a statement, is the statement consistent with other things you know about the world,…
Nice try, AI
😄
I don’t believe you.
Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now the same ones sharing blatantly AI generated slop from strangers on Facebook as if it were gospel.
Social media broke so many people’s brains
Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.
I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.
Everyone who disagrees with you is a bot, probably from Russia. You are very smart.
Do you still think you’re going to be allowed to vote for the next president?
… and a .ml user pops out from the woodwork
Everyone who disagrees with you is a bot
I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?
Sure, but you seem to be under the impression the only bots are the people that disagree with you.
There’s nothing stopping bots from grooming you by agreeing with everything you say.
I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though, nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech literate, having worked in tech (my dad is a software engineer) and still continue to not be dumb about tech… Aside from thinking e-greeting cards are rad.
e-greeting cards
Haven’t even thought about them in what seems like a quarter of a century.
There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.
With this picture, does that make you Cyrano de Purrgerac?
I have it on good authority that everyone on Lemmy is a bot except you.
Beep boop
There’s no guarantee anyone on there (or here) is a real person or genuine.
I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.
The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.
We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near perfect voice ai and tinkered with image gen. What happens when robots pass the imitation game?
I’ve worked in quite a few DARPA projects and I can almost 100% guarantee you are correct.
Some of us have known the internet has been dead since 2014
Shall we talk about Eglin Airforce base or Jessica Ashoosh?
I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings
Russia has been using LLM based social media bots for quite a while now
4chan is surely filled with glowie experiments like this.
I’m conflicted by that term. Is it ok that it’s been shortened to “glow”?
Conflict? A good image is a good image regardless of its provenance. And yes 2020s era 4chan was pretty much glowboy central, one look at the top posts by country of origin said as much. It arguably wasn’t worth bothering with since 2015
I’m sure there are individuals doing worse one off shit, or people targeting individuals.
I’m sure Facebook has run multiple algorithm experiments that are worse.
I’m sure YouTube has caused worse real-world outcomes with the rabbit holes their algorithm used to promote. (And they have never found a way to completely fix the rabbit-hole problem without destroying the usefulness of the algorithm completely.)
The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.
that’s right, no reason to do anything about it. let’s just continue to fester in our own shit.
That’s not at all what I was getting at. My point is the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry, across social media.
sounded really dismissive to me.
If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kind of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.
This was comments, not posts. They were using a model to approximate the demographics of a poster, then using an LLM to generate a response counter to the posted view tailored to the demographics of the poster.
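The two-stage pipeline described here can be sketched roughly like this. Everything below is illustrative: the function names, the keyword heuristics standing in for the profiling model, and the prompt wording are all made up; the actual study used real LLMs for both stages.

```python
# Hypothetical sketch of the pipeline: stage 1 infers a poster's
# demographics from their post history; stage 2 builds a prompt asking
# an LLM for a counter-argument tailored to that profile.

def infer_demographics(post_history: list[str]) -> dict:
    """Crude stand-in for the profiling model: keyword heuristics."""
    text = " ".join(post_history).lower()
    return {
        "age_range": "18-29" if "college" in text else "unknown",
        "leaning": "left" if "union" in text
                   else "right" if "tariff" in text
                   else "unknown",
    }

def build_persuasion_prompt(view: str, profile: dict) -> str:
    """Compose the prompt that would be sent to the reply-generating LLM."""
    return (
        f"The author appears to be {profile['age_range']}, "
        f"politically {profile['leaning']}. "
        f"Write a Reddit comment arguing against this view, "
        f"tailored to that reader: {view!r}"
    )

profile = infer_demographics(["Finishing my college thesis", "union meeting tonight"])
prompt = build_persuasion_prompt("CMV: remote work is bad", profile)
print(prompt)
```

The point of the sketch is the division of labor: the profiling step never writes the argument, and the argument step never sees the raw post history, only the inferred profile.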
AI posts or just creative writing assignments.
Right. Subs like these are great fodder for people who just like to make shit up.
ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.
It could, if it announced itself as such.
Instead it pretended to be a rape victim and offered “its own experience”.
Blaming a language model for lying is like charging a deer with jaywalking.
Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.
Which, in an ideal world, is why AI generated comments should be labeled.
I always brake when I see a deer at the side of the road.
(Yes people can lie on the Internet. If you funded an army of propagandists to convince people by any means necessary I think you would find it expensive. People generally find lying like this to feel bad. It would take a mental toll. With AI, this looks possible for cheaper.)
I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.
That lie was definitely inappropriate, but it would still have been inappropriate if it was told by a human. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?
I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.
If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people’s forum conversations, but there should be a prioritization of actual human experiences there.
I was unaware that “Internet Ethics” was a thing that existed in this multiverse
No - it’s research ethics. As in you get informed consent. It just involves the Internet.
If the research records any sort of human behavior, all participants must know about it ahead of time and agree to take part.
This is a blanket attempt to study human behavior without an IRB and not having to have any regulators or anyone other than tech bros involved.
Bad ethics are still ethics.