

This is such an important distinction. Current AI is incapable of wanting to cause any of that harm, yet it’s already happening. The danger won’t be Skynet; it will be, and always has been, human greed and ignorance.
The only thing you have to fear.
Hmm, this makes me think of the tradition in certain parts of the internet where people publicly post the name and crime of this convicted rapist. They’ll explain where he’s currently living, the name he’s trying to go by, and which bars he’s been seen at. This activity seems to stem from outrage at the excessive leniency he was shown by the judge, although it could also be about protecting other potential victims.
I wonder if this kind of vigilante doxxing would fall under the scope of such a law, especially when his name is already in so many publications.
First the Streisand effect led to her home. Now it leads to her entire discography. Poor Barbra Streisand.
Yeah, I took a look at the code they used in the article; it might help someone generate functional attacks, but a rando experimenting without permission would likely get banned from the service.
I just tried this on ChatGPT; it doesn’t work.
Don’t give France any ideas.
They’re trying so hard to lose my patronage, but they forgot I cancelled last year.
Your assessment seems spot on to me. I’m connecting some projected dots to late stage capitalism. Perhaps the AIs will trickle down and such if we hold off on regulations.
Of course it’s possible for the government to impose regulations without sticking their face in and motorboating the AI’s contents. Google, Microsoft et al. would love to prevent this from happening because they actually do have their faces in there.