

More space trash from trash corporations…
Cool tech used for boring purposes.
He / They
the repetitive tasks that turn any job into a grind are prime candidates
The problem is, this varies from person to person. My team divvies up tasks (or did; I quit not too long ago) based on what different people enjoy doing more, and no executive would have any clue which recurring tasks are repetitive (in the derogatory sense) and which ones are just us doing our job. I like doing network traffic analysis. My coworker likes container hardening. Both of those could be automated, but that would remove something we each enjoy from our respective jobs.
A big theme in recent AI company rhetoric is that AI will “do analyses” and people will “make decisions”, but how on earth are you going to maintain the technical understanding needed to make a decision without doing the analyses?
An AI saying, “I think this is malicious, what do you want to do?” isn’t a real decision if the person answering can’t verify or repudiate the analysis.
It’s not an empty panic if you actually have real reasons why it’s harmful.
Every panic has ‘reasons’ why something is harmful. Whether they are valid reasons, proportional reasons, or reasons that matter, is up for interpretation.
First you’d need laws in place that determine how the social media algorithms should work, then we can talk.
Yes, then we can talk about banning systems that remain harmful despite corporate influence being removed. You’re still just arguing (by analogy) to ban kids from places where smoking adverts are until we fix the adverts.
companies ARE making it harmful, so it IS harmful
No, companies didn’t make social media harmful, they made specific aspects of social media harmful. You need to actually approach this with nuance and precision if you want to fix the root cause.
That, and there are various other reasons why it’s harmful
Every reason that’s been cited in studies for social media being harmful to kids (algorithmic steering towards harmful content, influencer impact on kids’ self-image, etc.) is a result of companies seeking profits by targeting kids. There are other harms as well, such as astroturfing campaigns, but those are not unique to social media and can’t be protected against by banning it.
Let me ask you upfront, do you believe that children ideally should not have access to the internet apart from school purposes (even if you would not mandate a ban)?
This is the newest ‘think of the children’ panic.
Yes, social media is harmful because companies are making it harmful. It’s not social media that’s the root cause, and wherever kids go next those companies will follow and pollute unless stopped. Social isolation is not “safety”; it’s damaging as well, and social media is one of the last freely accessible social spaces kids have.
We didn’t solve smoking adverts for kids by banning kids from going places where the adverts were; we banned the adverts and penalized the companies running them.
This got me curious, so I started digging into their documentation. It looks like you can currently stand up the appview backend as a local dev environment, but making it actually run as an alternative instance doesn’t appear to be possible (which is why no one is doing it).
There is only one instance, which is the company’s, because the company has not released the server software. It’s completely centralized.
That’s not what the current PR lays out, and I’m not going to give them preemptive credit for future maybes. Right now they’re just X v2.
Once they actually release the server software for self-hosting, i.e. once the app is actually even a little decentralized, and not just selling itself on a feature that doesn’t exist, we can see how much decentralization the trusted reviewers have.
This neither centralizes nor decentralizes. It’s exactly just as centralized as before (which, as they are one company, is total).
Whether Bluesky issues a checkmark, or whether Bluesky tells someone else that they are trusted (by Bluesky), and thus can also issue them, Bluesky is the one who is in control of checkmarks.
Unless Bsky sets up some kind of decentralized council that they don’t control to manage this list, it’s just a form of deputization, and deputies are all subordinate to the ‘sheriff’.
Grants of revocable authority are not decentralization.
set of trusted authorities
Sounds like centralization to me. Who decides whether to vest authority in this group? Who selects the members of this group?
Unless there is some method for each host/user to nominate members, and it changes dynamically based on total votes at any given time, you’ve just permanently entrenched centralized authority in your (supposedly moving to) ‘decentralized’ app.
Takedown resistance is a natural consequence of decentralization, but it’s not decentralization itself.
Technical means to evade takedown like you’re describing also tend to add complexity, which reduces usability, whereas language support reduces complexity for speakers of the supported languages.
I think this scoring system is a little haphazard, and should probably be divided into multiple separate, parallel scores. Takedown resistance needs its own score, based on ability to integrate with anonymization tools, ownership of codebase, accessibility and security of dependencies, etc.
I think this scoring system is missing Language Support as an important aspect of decentralization.
Centralization happens not just through commercial hosting (centralization of ownership), but also through self-hosters being in relatively centralized locations, limited jurisdictions, etc.: an app with 300 self-hosted instances all located in one city (or even just all within Five Eyes countries) is much easier to shut down than an app with those 300 spread across the globe, and language support is important to help facilitate that level of decentralization.
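To make that concrete, here’s a rough sketch of what separate, parallel scores could look like. Everything in it (the axis names, the numbers) is hypothetical, just to show the shape of the idea rather than a real methodology:

```python
from dataclasses import dataclass

@dataclass
class DecentralizationScores:
    """Parallel axes reported side by side, instead of one blended number."""
    takedown_resistance: float    # anonymization integration, codebase
                                  # ownership, dependency accessibility/security
    ownership_diversity: float    # how many independent parties run instances
    jurisdictional_spread: float  # how many legal regimes those instances span
    language_support: float       # how many locales the software actually serves

# A hypothetical app can rank high on one axis and low on another;
# averaging them into a single score would hide exactly that.
example = DecentralizationScores(
    takedown_resistance=0.3,
    ownership_diversity=0.8,
    jurisdictional_spread=0.2,
    language_support=0.6,
)
print(example)
```

Keeping the axes separate means a project can’t paper over, say, weak jurisdictional spread with a big raw instance count.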
Not that unusual, unfortunately. The infosec community relies on researchers publishing PoC exploits in order for people to determine whether they’re affected or not by a given vulnerability, but that trust in PoCs can obviously be exploited.
Not everyone has the time or knowledge to develop their own PoCs, but you should definitely not run one you can’t understand, which unfortunately happens rather often.
Good writeup!
It’s never good to run PoCs sight unseen; not just because of this kind of situation, but because different PoCs will have different results, and you need to know what to expect.
Also, if you see any level of obfuscation in PoC code, it’s more than likely malicious.
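For illustration, here’s a defanged sketch of the classic shape of that red flag. The encoded string below is hypothetical and harmless (it just prints a message), but in a malicious “PoC” it wouldn’t be:

```python
import base64

# Red flag: the "PoC" hides its real behavior behind an encoding layer.
# A legitimate PoC demonstrates a vulnerability in readable code; it has
# no reason to smuggle logic past the reviewer like this.
hidden = "cHJpbnQoJ3RoaXMgY291bGQgaGF2ZSBiZWVuIGEgcmV2ZXJzZSBzaGVsbCcp"

# base64 + exec() means the code you audited is not the code that runs.
exec(base64.b64decode(hidden).decode())
```

If you can’t read every byte that will actually execute, treat the PoC as hostile.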
Gentle reminder that actions should not be discussed with strangers, and elevated actions should never be discussed online/digitally at all.
Inside China, such a network of large-scale AGI [Artificial Generative Intelligence] systems could autonomously improve repression
Wooooow.
AGI stands for Artificial GENERAL Intelligence, not generative. Nice attempt to muddy the waters to confuse and scare people, given that much writing on AGI talks about how dangerous it is.
“The terrorists are in possession of an A-Bomb [Asbestos Bomb]!”
Either this opinion author has no clue, or this is very deliberate misinformation.
Also: This ‘College Protester’ Isn’t Real. It’s an AI-Powered Undercover Bot for Cops

So using ML to police citizens is literally already here.
Frankly, I think that fears about “continuity of consciousness” are jumping the gun a little as an objection to current AI. Water usage, capitalism, and the asymmetry of information creation/spread are much more pressing, even in the medium to long term.
we have plenty of issues
I would venture to say that despite those issues, thanks to y’all’s moderation this space is non-toxic on the whole. It may be that size is a de facto limit on maintaining a space like Beehaw, or it may be that we (as in, internet users) just haven’t figured out the best format/structure for scaling up safely.
I think a microblogging platform that allows moderated, invite-only sub-groups (and which doesn’t show you any posts by users or groups you don’t subscribe to) could be a good step towards that. Sort of a combination of Bluesky feeds + Beehaw communities/FB groups. That could give you a Beehaw-like moderation experience in a microblog platform.
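The core visibility rule is simple enough to sketch. Everything here (function name, post shape) is hypothetical; it just shows the strict opt-in behavior I mean:

```python
# Hypothetical sketch of a strictly opt-in feed: nothing appears unless
# the viewer explicitly subscribed to its author or its group.
def build_feed(posts, subscribed_users, subscribed_groups):
    return [
        post for post in posts
        if post["author"] in subscribed_users
        or post.get("group") in subscribed_groups
    ]

feed = build_feed(
    posts=[
        {"author": "alice", "group": "gardening", "text": "hello"},
        {"author": "stranger", "group": None, "text": "engagement bait"},
    ],
    subscribed_users={"alice"},
    subscribed_groups={"gardening"},
)
print(feed)  # only alice's post survives; nothing is injected "for engagement"
```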
I think most microblogging platforms’ failure in this area likely stems from them being ad- and engagement-driven: their corporate “need” for users to be ever more active across “interest domains” clashes with their users’ need to stay isolated from users who are toxic to them.
IMO, toxicity isn’t as much about who comes into your environment as it is about who you allow to remain. There are plenty of low- and non-toxic spaces online, they’re just heavily moderated.
So the article takes one competition doing this and presents it as “Chinese hacking competitions” in general. There are tens, if not hundreds, of hackathons in China.
Please stop posting these heavily biased or misleading articles about China from questionable sites.
We get it, you don’t like China. We got that after the first 50 posts about China being bad. Most of us don’t like the CCP either.
But at least post reputable sources that don’t push agendas quite so blatantly.
For anyone interested, this site (firstpost.com) is an English-language Indian news site owned by Network18, a news conglomerate with a right-leaning, pro-Modi bias.