• 0 Posts
  • 63 Comments
Joined 2 years ago
Cake day: June 14th, 2023



  • I don’t think it makes sense to keep up with this “look what you idiots did” garbage I see so often because, as you said, there’s no guarantee it would have made a difference; unsolicited gloating at someone because their poor decision has led to increased misery is gross and counterproductive; and it inherently diminishes the ultimate culpability of Republican voters, who did in fact directly vote for this and should all be held to account. That said, there is a good chance that between abstentions, third-party protest votes, and the distribution of both across the critical EC votes, the movement may have tipped the scales–but there is really no way to know either way.


  • Yes, this is what I said. In situations where a work can conceivably be considered co-authored by a human, those human-contributed components get copyright. However, whether that activity constitutes contribution, and how it is demarcated across the work, is determined on a case-by-case basis. This doesn’t mean any inpainting at all renders the whole work copyright protected–it means that it could in cases where the inpainting is so granular and corresponds so directly to human decision making that it’s effectively digital painting. This is probably a higher bar than most expect but, as is not atypical with copyright, it is largely a case-by-case, quantitative/adjudicated, vibes-based determination.

    The second situation you quoted is also standard and effectively stands for the fact that an ordered compilation of individually copyrighted works may itself have its own copyright in the work as a whole. This is not new and is common sense when you consider the way large creative media projects work.

    Also worth mentioning that none of this obviates the requirement that registrations reasonably identify and describe the AI generated components of the work (presumably to effectively disclaim those portions). It will be interesting to see a defense raised that a holder failed to do so and thereby committed a fraud on the Copyright Office, losing their copyright in the work as a whole (a possible penalty for committing fraud on the Office).


  • The CO didn’t say AI generated works were copyrightable. In fact, the second part of the report very much affirmed their earlier decisions that AI generated content is necessarily not protected under copyright. What you are probably referring to is the discussion the Office presented about joint-work-style pieces–that is, where a human made additional creative contributions to the AI generated material. In that case, the portions that were generated by the human contributor are protected under copyright, as expected. Further, they made very clear that what constitutes creative contribution, and thus gets coverage, is determined on a case-by-case basis. None of this is all that surprising, nor does it refute the rule that AI generated material, having been authored by something other than a human, is not afforded any copyright protection whatsoever.


  • For sure. I personally think our current IP laws are well equipped to handle AI generated content, even if there are many other situations where they require a significant overhaul. And the person you responded to is really only sort of maybe half correct. Those advocating for, e.g., some sort of copyright infringement in training AI aren’t going to bat for current IP laws--they’re advocating for altogether new IP laws that would effectively further assetize intangibles and allow even more rent seeking in them. Artists would absolutely not come out ahead on this, and it’s ludicrous to think so. Publishing platforms would make creators sign those rights away, and large corporations would be the only ones financially capable of acting in this new IP landscape. The likely compromise would also attach a property right to model outputs, so it would actually become far more practical to leverage AI generated material at commercial scale, since the publisher could enforce IP rights in the product.

    The real solution to this particular issue is to require that all models that output materials to the public at large be open source, and that all outputs distributed at large be marked as AI generated and thus effectively in the public domain.




  • It could of course go up to SCOTUS and a new right could effectively be legislated from the bench, but that is unlikely, and the nature of these models, combined with what has counted as a copy under the rubric US copyright has operated under effectively forever, means that merely training and deploying a model is almost certainly not copyright infringement. This is a pretty common consensus among IP attorneys.

    That said, a lot of other very obvious infringement is coming out in discovery in many of these cases. Like torrenting all the training data. THAT is absolutely an infringement, but it is effectively unrelated to the question of whether lawfully accessed content being used as training data retroactively makes its access unlawful (it really almost certainly doesn’t).


  • I wouldn’t even call it algorithm-driven myopia but rather myopically designed algorithmic idiocy. It isn’t wildly challenging to design a filter that captures semantic context before recommending action on a piece of text, even if the underlying reasoning is wildly petty ratfuckery. It isn’t just the petty, meandering cruelty of these dumb pieces of shit, though that’s certainly enough to merit outrage. It’s the combination with historic incompetence that just exponentially amplifies that outrage. Here’s to a mario party to roll in 2026.


  • Even in your latter paragraph, it wouldn’t be an infringement. Assuming the art was lawfully accessed in the first place, like by clicking a link to a publicly shared portfolio, no copy is being encoded into the model. There is currently no intellectual property right invoked merely by training a model-- if people want there to be, and it isn’t an unreasonable thing to want (though I don’t agree it’s good policy), then a new type of intellectual property right will need to be created.

    What’s actually baffling to me is that these pieces presumably are all effectively public domain as they’re authored by AI. And they’re clearly digital in nature, so wtf are people actually buying?


  • If you are “torn” on whether it is a good thing to grant a wealthy campaign donor unfettered and unquestionably illegal access to government and bureaucratic infrastructure, with zero accountability or oversight, and who has displayed absolutely zero competence at managing any public institution (and in fact has a record of incompetence at managing private enterprises), then I honestly think you’re one of the millions of Americans who just needs to fuck off and stop contributing to adult decision-making. You’re simply not up to the task.





  • AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow--and they’re not going anywhere, because of the stakes at hand.

    The first is just straight regulatory. Regulators don’t have a very good or very consistent working framework to apply to these technologies, but that’s in part due to how vast the field is in terms of application. The second is somewhat related to the first but is also very market driven, and that is the issue of explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don’t just want predictions/detections--they want and need to understand why a model “thinks” what it does (a rough sketch of one common approach is below). Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard.

    I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing” unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that that simply is not necessary for an enormous portion of the value proposition of “AI” to be realized.
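    To make that explainability point a bit more concrete, here’s a rough, hypothetical sketch of one common technique (gradient-based saliency for an image classifier). The names are made up and this isn’t any particular product’s method--just an illustration assuming a PyTorch-style classifier.

        import torch

        def saliency_map(model, image, target_class):
            # image is assumed to be a [C, H, W] float tensor and model a classifier
            # returning per-class scores. Question answered: which input pixels most
            # influenced the score for target_class?
            model.eval()
            image = image.clone().requires_grad_(True)        # track gradients w.r.t. the input
            score = model(image.unsqueeze(0))[0, target_class]
            score.backward()                                  # populates image.grad
            # Collapse color channels into a single per-pixel importance value.
            return image.grad.abs().max(dim=0).values

    Even a map like this only shows a clinician where the model “looked,” not why it weighed those regions the way it did, which is part of why the explainability bar is so hard to clear.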


  • Summary judgement is not a thing separate from a lawsuit. It’s literally a standard filing made in nearly every lawsuit (even if just as a hail mary). You referenced “beyond a reasonable doubt” earlier. That is also not the standard used in (US) civil cases–the standard is typically a preponderance of the evidence.

    I’m also not sure what you mean by “court approved documentation.” Different jurisdictions approach contract law differently, but courts don’t “approve” most contracts–parties allege there was a binding contractual agreement, present their evidence to the court, and a mix of judge and jury determines whether, under the jurisdiction’s laws, an enforceable agreement was formed and how it can be enforced (i.e., are the obligations severable, what are the damages, etc.).


  • There’s plenty you could do if no label is produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, but you could pair that with an automatic decrease in speed to gather more frames, stop the whole vehicle (safely, of course), divert the path, and I’m sure plenty more that an actual domain and subject matter expert might come up with–or a whole team of them. A rough sketch of the basic thresholding-and-fallback idea is below.

    But while we’re on the topic, it’s not really right to even label these “confidence” scores as such–they’re just output weights associated with the respective labels. We’ve sort of decided they vaguely match up to something approximating confidence values, but they aren’t based on a ground truth the way I’m understanding your comment to imply–they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
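    To be clear, this isn’t anyone’s actual design, just a rough, hypothetical sketch of the “nothing cleared the bar, so fall back to something conservative” idea; the threshold value and fallback names are invented for illustration.

        import numpy as np

        def softmax(logits):
            z = np.exp(logits - np.max(logits))   # subtract the max for numerical stability
            return z / z.sum()

        def decide(logits, accept_threshold=0.85):
            # The "confidences" here are just normalized output weights, as noted above.
            probs = softmax(np.asarray(logits, dtype=float))
            top = int(probs.argmax())
            if probs[top] >= accept_threshold:
                return ("act_on_label", top)      # the top label is strong enough to act on
            # No label cleared the bar: fall back to a conservative behavior
            # (slow down, gather more frames, rerun) instead of guessing.
            return ("slow_down_and_rerun", None)

        print(decide([2.0, 1.9, 0.1]))   # near-tie between two classes -> fallback branch
        print(decide([6.0, 0.5, 0.2]))   # clear winner -> act on it

    The softmax numbers behave like probabilities, but, as the comment notes, they are not calibrated against any ground truth.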