Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 804 Comments
Joined 1 year ago
Cake day: March 3rd, 2024

  • But you’re claiming that this knowledge cannot possibly be used to make a work that infringes on the original.

    I am not. The only thing I’ve been claiming is that AI training is not copyright violation, and the AI model itself is not copyright violation.

    As an analogy, you can use Photoshop to draw a picture of Mario. That does not mean that Photoshop is violating copyright by existing, and Adobe is not violating copyright by having created Photoshop.

    You claimed that AI training is not even in the domain of copyright, which is different from something that is possibly in that domain, but is ruled to not be infringing.

    I have no idea what this means.

    I’m saying that the act of training an AI does not perform any actions that are within the realm of the actions that copyright could actually say anything about. It’s like if there’s a law against walking your dog without a leash, and someone asks “but does it cover aircraft pilots’ licenses?” No, it doesn’t, because there’s absolutely no commonality between the two subjects. It’s nonsensical.

    Honestly, none of your responses have actually supported your initial position.

    I’m pretty sure you’re misinterpreting my position.

    The “copyright situation” regarding an actual literal picture of Mario doesn’t need to be fixed because it’s already quite clear. Nothing needs to change for an AI-generated image of Mario to count as a copyright violation; that’s what the law already says, and AI’s involvement is irrelevant.

    When people talk about needing to “change copyright” they’re talking about making something that wasn’t illegal previously into something that is illegal after the change. That’s presumably the act of training or running an AI model. What else could they be talking about?



  • Yes, that’s what I said. There are no “additional restrictions” from having a GPL license on something. The GPL works by granting rights that weren’t already present under default copyright. You can reject the GPL on an open-source piece of software if you want to, but then you lose the additional rights that the GPL gives you.


  • I’d say it can be a problem because there have been examples of getting AIs to spit out entire copyrighted passages.

    Examples that have turned out to be either the result of great effort to force the output to be a copy, the result of poor training techniques that cause overfitting, or both combined.

    If this is really such a straightforward case of copyright violation, surely there are court cases where it’s been ruled to be so? People keep arguing legality without ever referencing case law, just news articles.

    Furthermore, some works can have additional restrictions on their use. I couldn’t for example train an AI on Linux source code, have it spit out the exact source code, then slap my own proprietary commercial license on it to bypass GPL.

    That’s literally still just copyright. There’s no “additional restrictions” at play here.


  • Learning what a character looks like is not a copyright violation. I’m not a great artist but I could probably draw a picture that’s recognizably Mario, does that mean my brain is a violation of copyright somehow?

    Yet evidence supports it, while you have presented none to support your claims.

    I presented some; in fact, you referenced what I presented in the very comment where you say I presented none.

    You can actually support your case very simply and easily. Just find the case law where AI training has been ruled a copyright violation. It’s been a couple of years now (as evidenced by the age of that news article you dug up), yet all the lawsuits are languishing or defunct.

  • That article is over a year old. The NYT case against OpenAI turned out to be quite flimsy, their evidence was heavily massaged. What they did was pick an article of theirs that was widely copied across the Internet (and thus likely to be “overfit”, a flaw in training that AI trainers actively avoid nowadays) and then they’d give ChatGPT the first 90% of the article and tell it to complete the rest. They tried over and over again until eventually something that closely resembled the remaining 10% came out, at which point they took a snapshot and went “aha, copyright violated!”

    They had to spend a lot of effort to get that flimsy case. It likely wouldn’t work on a modern AI, training techniques are much better now. Overfitting is better avoided and synthetic data is used.

    Why do you think that of all the observable patterns, the AI will specifically copy “ideas” and “styles” but never copyrighted works of art?

    Because it’s literally physically impossible. The classic example is Stable Diffusion 1.5, which had a model size of around 4GB and was trained on over 5 billion images (the LAION-5B dataset). If it were actually storing the images it was trained on, it would be compressing them to less than one byte each.
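    A quick back-of-the-envelope check of that ratio (using the figures from the comment above, a ~4 GB model and ~5 billion training images; the exact counts are the comment’s own, not independently verified here):

    ```python
    # Rough bytes-per-image if the model actually "stored" its training set.
    # Figures assumed from the comment: ~4 GiB model, ~5 billion images.
    model_bytes = 4 * 1024**3        # ~4 GiB model file
    num_images = 5_000_000_000       # LAION-5B image count

    bytes_per_image = model_bytes / num_images
    print(f"{bytes_per_image:.2f} bytes per image")  # → 0.86 bytes per image
    ```

    For comparison, even an aggressively compressed JPEG is tens of kilobytes, so the model cannot be a literal archive of its training images.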

    AIs don’t seem to be able to distinguish between abstract ideas like “plumbers fix pipes” and specific copyright-protected works of art.

    This is simply incorrect.



  • FaceDeer@fedia.io to Technology@lemmy.world · Why I don't use AI in 2025 · 23 hours ago

    Don’t make “profiteering AI companies” pay for UBI. Make all companies pay for UBI. Just tax their income and turn it around into UBI payments.

    One of the major benefits of UBI is how simple it is. The simpler the system is the harder it is to game it. If you put a bunch of caveats on which companies pay more or pay less based on various factors, then there’ll be tons of faffing about to dodge those taxes.


  • FaceDeer@fedia.io to Technology@lemmy.world · Why I don't use AI in 2025 · 23 hours ago

    Copyright, yes it’s a problem and should be fixed.

    No, this is just playing into another of the common anti-AI fallacies.

    Training an AI does not do anything that copyright is even involved with, let alone prohibited by. Copyright is solely concerned with the copying of specific expressions of ideas, not about the ideas themselves. When an AI trains on data it isn’t copying the data, the model doesn’t “contain” the training data in any meaningful sense. And the output of the AI is even further removed.

    People who insist that AI training is violating copyright are advocating for ideas and styles to be covered by copyright. Or rather by some other entirely new type of IP protection, since as I said this is nothing at all like what copyright already deals with. This would be an utterly terrible thing for culture and free expression in general if it were to come to pass.

    I get where this impulse comes from. Modern society has instilled a general sense that everything has to be “owned” by someone, even completely abstract things. Everyone thinks that they’re owed payment for everything that they can possibly demand payment for, even if it’s something that just yesterday they were doing purely for fun and releasing to the world without a care. There’s this base impulse of “mine! Therefore I must control it!” Ironically, it’s what leads to the capitalist hellscape so many people are decrying at the same time they demand more.


  • Ah, this is that Daenerys bot story again? It keeps making the rounds, always leaving out a lot of rather important information.

    The bot actually talked him out of suicide multiple times. The kid was seriously disturbed and his parents were not paying the attention they should have been to his situation. The final chat before he committed suicide was very metaphorical, with the kid saying he wanted to “join” Daenerys in West World or wherever it is she lives, and the AI missed the metaphor and roleplayed Daenerys saying “sure, come on over” (because it’s a roleplaying bot and it’s doing its job).

    This is like those journalists who ask ChatGPT “if you were a scary robot how would you exterminate humanity?” And ChatGPT says “well, poisonous gases with traces of lead, I guess?” And the journalists go “gasp, scary robot!”