He / They

  • 3 Posts
  • 411 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • “Chinese hacking competitions [plural] are different”

    “A 2018 rule mandates participants of the Tianfu Cup [singular] to hand over their findings to the government”

    “This approach effectively turned hacking competitions [plural] …”

    So the article takes one competition doing this and generalizes it into a claim about “Chinese hacking competitions”. There are tens if not hundreds of hackathons in China.

    Please stop posting these heavily biased or misleading articles about China from questionable sites.

    We get it, you don’t like China. We got that after the first 50 posts about China being bad. Most of us don’t like the CCP either.

    But at least post reputable sources that don’t push agendas quite so blatantly.

    For anyone interested, this site (firstpost.com) is an English-language Indian news site owned by Network18, a news conglomerate with a right-leaning, pro-Modi bias.


  • “the repetitive tasks that turn any job into a grind are prime candidates”

    The problem is, this varies from person to person. My team divvied up tasks (before I quit, not long ago) based on what different people enjoy doing, and no executive would have any clue which recurring tasks are a grind and which are just us doing our job. I like doing network traffic analysis. My coworker likes container hardening. Both could be automated, but that would remove something each of us enjoys from our respective jobs.

    A recurring move in recent AI-company rhetoric is that AI will “do analyses” and people will “make decisions”, but how on earth are you going to maintain the technical understanding needed to make a decision without doing the analyses?

    An AI saying, “I think this is malicious, what do you want to do?” isn’t a real decision if the person answering can’t verify or repudiate the analysis.


  • “It’s not an empty panic if you actually have real reasons why it’s harmful.”

    Every panic has “reasons” why something is harmful. Whether those reasons are valid, proportional, or actually matter is up for interpretation.

    “First you’d need laws in place that determine how the social media algorithms should work, then we can talk.”

    Yes, then we can talk about banning systems that remain harmful despite corporate influence being removed. You’re still just arguing (by analogy) for banning kids from anywhere smoking adverts appear until we fix the adverts.

    “companies ARE making it harmful, so it IS harmful”

    No, companies didn’t make social media harmful; they made specific aspects of social media harmful. You need to approach this with nuance and precision if you want to fix the root cause.

    “That, and there are various other reasons why it’s harmful”

    Every reason cited in studies for social media being harmful to kids (algorithmic steering towards harmful content, influencers’ impact on kids’ self-image, etc.) is a result of companies seeking profits by targeting kids. There are other harms as well, such as astroturfing campaigns, but those are not unique to social media and can’t be protected against by banning it.

    Let me ask you upfront: do you believe that children ideally should not have access to the internet apart from school purposes (even if you would not mandate a ban)?


  • This is the newest ‘think of the children’ panic.

    Yes, social media is harmful because companies are making it harmful. It’s not social media that’s the root cause, and wherever kids go next, those companies will follow and pollute unless stopped. Social isolation is not “safety”; it’s damaging as well, and social media is one of the last freely accessible social spaces kids have.

    We didn’t solve smoking adverts for kids by banning kids from going places where the adverts were; we banned the adverts and penalized the companies running them.


  • This neither centralizes nor decentralizes. It’s exactly as centralized as before (which, since they are one company, is total).

    Whether Bluesky issues a checkmark itself, or tells someone else that they are trusted (by Bluesky) and may therefore also issue them, Bluesky is the one in control of checkmarks.

    Unless Bsky sets up some kind of decentralized council that they don’t control to manage this list, it’s just a form of deputization, and deputies are all subordinate to the ‘sheriff’.

    Grants of revocable authority are not decentralization.
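
    To make that concrete, here is a toy sketch (hypothetical names and API, not anything Bluesky actually ships) of why revocable grants keep all authority at the root:

```python
# Toy model of checkmark deputization. All names here are hypothetical;
# this is NOT Bluesky's actual implementation or API.
class RootAuthority:
    def __init__(self) -> None:
        self.deputies: set[str] = set()
        self.checkmarks: set[tuple[str, str]] = set()  # (issuer, user)

    def deputize(self, org: str) -> None:
        # The "decentralization": the root lets someone else issue checkmarks.
        self.deputies.add(org)

    def issue(self, issuer: str, user: str) -> bool:
        # A checkmark is valid only if the root currently recognizes the issuer.
        if issuer == "root" or issuer in self.deputies:
            self.checkmarks.add((issuer, user))
            return True
        return False

    def revoke_deputy(self, org: str) -> None:
        # One call by the root erases the deputy and everything they issued:
        # the deputy only ever held borrowed, revocable power.
        self.deputies.discard(org)
        self.checkmarks = {c for c in self.checkmarks if c[0] != org}


root = RootAuthority()
root.deputize("nytimes")
root.issue("nytimes", "alice")   # a deputy can issue...
root.revoke_deputy("nytimes")    # ...until the root says otherwise
print(("nytimes", "alice") in root.checkmarks)  # False
```

    However many deputies exist, every validity check and every revocation still terminates at the root, which is the point.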


  • Not that unusual, unfortunately. The infosec community relies on researchers publishing PoC exploits so that people can determine whether or not they’re affected by a given vulnerability, but that trust in PoCs can obviously be exploited.

    Not everyone has the time or knowledge to develop their own PoCs, but you should definitely not run a PoC you can’t understand, which is unfortunately rather common.


  • “we have plenty of issues”

    I would venture to say that despite those issues, thanks to y’all’s moderation this space is non-toxic on the whole. It may be that size is a de facto limit on maintaining a space like Beehaw, or it may be that we (as in, internet users) just haven’t figured out the best format/structure for scaling up safely.

    I think a microblogging platform that allows moderated, invite-only sub-groups (and which doesn’t show you any posts by users or groups you don’t subscribe to) could be a good step towards that; a rough sketch of that visibility rule is below. Sort of a combination of a Bluesky feed + Beehaw communities/FB groups. That could give you a Beehaw-like moderation experience on a microblogging platform.

    I think most microblogging platforms’ failure in this area likely stems from their being ad- and engagement-driven: their corporate “need” for users to be more and more active across “interest domains” clashes with their users’ need to stay isolated from users who are toxic to them.
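
    A minimal sketch of that subscription-only visibility rule (all names hypothetical; this is not any existing platform’s API):

```python
from dataclasses import dataclass, field

@dataclass
class Viewer:
    followed_users: set[str] = field(default_factory=set)
    joined_groups: set[str] = field(default_factory=set)  # invite-only, moderated

@dataclass
class Post:
    author: str
    group: str | None = None  # None means a plain microblog post

def visible(viewer: Viewer, post: Post) -> bool:
    # No trending tab, no algorithmic injection: subscription or nothing.
    if post.group is not None:
        return post.group in viewer.joined_groups
    return post.author in viewer.followed_users

me = Viewer(followed_users={"alice"}, joined_groups={"beehaw-tech"})
print(visible(me, Post(author="alice")))                       # True
print(visible(me, Post(author="rando", group="viral-stuff")))  # False: never shown
```

    Moderation then lives inside each group, Beehaw-style, rather than in one global, engagement-optimized feed.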