I changed the naming to “engagement poisoning” after you and several other commenters correctly noted that while over-optimization for engagement metrics is a component of “enshittification,” it is not sufficient on its own to be called “enshittification.”
You are making a good point here with the strict definition of “enshittification.” But in your opinion, what is it then? OpenAI is diluting the quality of its answers with unnecessary clutter, prioritizing feel-good style over clarity to cater to the user’s ego. What would you call the stage where usefulness is sacrificed for ease of consumption, like when Reddit’s layout started favoring meme-style content to boost engagement?
So, just to be clear: you modified the system instructions with the “Absolute Mode” prompt mentioned above, and ChatGPT was still that wordy on your account?
Can you share one or two of those questions so I can counter-check?
Just to give an impression of how the tone changes after applying the above-mentioned custom instructions:
OpenAI aims to make users feel better, catering to the user’s ego at the cost of the service’s usefulness, rather than getting the message across directly. Their objective is to retain more users, even if that reduces the utility for the user. From my point of view, it is enshittification in a way.
I agree that the change in tone is only a slight improvement and that the content is mostly the same. The way information is presented does affect how it is perceived, though. If negative content is buried under a pile of praise and nicely worded sentences, I’m more likely to misunderstand it or take the advice less seriously than it was meant, just so I feel comfortable as a user. If an AI is overly positive in its expression just to make me prefer it over another AI, even when it would be better to tell me the facts straight, that benefits only OpenAI (as in this case), not the user.

I gotta say that is what Grok is better at: despite its wordiness, it feels more direct, doesn’t talk around the facts, and gives clearer statements. It’s the old story of “making someone feel good” versus “being good, even when it hurts” by being direct when needed to get the message across. The content might be the same, but how the listener takes it, and what they do with it, also depends on how it is presented.
I appreciate your comment correcting the impression that the tone is the only or most important part, and highlighting that the content will mostly stay the same. I’d just add that the tone of a message also has an influence that shouldn’t be underestimated.
It turns ChatGPT into an emotionless yet very on-point AI, so be aware it won’t coddle your feelings in any way, no matter what you write. I added the instructions to the original post above.
Sure, I added it to the original post above.
Well I’ve heard Cybertrucks are getting cheap because not many people want them.
Well, such a license could simply obligate open-sourcing any AI model that has been trained on the content. Whether an instance prohibits or allows training of AI models would be a separate condition that’s up to the instance owner, and its users can decide whether they want to contribute under that condition or not.
Goldman Sachs would not publish it that prominently if it didn’t serve their internal goals. And their intention is certainly not to help the public or their competitors. There are well-made independent studies on some topics that reach opposite conclusions. Investment firms just do what serves them. I wouldn’t trust anything they publish.
There are studies suggesting that the information investment firms publish is based not on what they believe to be true, but on what they want others, including their competitors, to believe to be true. And in many cases it serves their investment strategy to publish the opposite of what they actually believe.
If Goldman Sachs said that, then most likely the opposite is true.
I’m surprised how readily everyone here believes what that capitalist company is saying, just because it fits their own narrative of AI being useless.
Why not ban them in schools? Are they needed for studying?
It’s not helpful because it’s not discussing content but attacking a person’s character. This leads to emotions running high rather than letting your reasoning win the discussion.
There should be an option to say “I’ve read it and I decided against it” that makes the dot disappear.
For technical notes, I’d recommend Sphinx docs or single reStructuredText files in cloud storage or a repository. You can generate all kinds of formats (PDF, HTML, etc.) from them, and it’s future-proof.
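To give an impression of the single-file workflow, here is a minimal sketch using docutils, the library Sphinx builds on, to turn one reStructuredText note into HTML. The file names are made up for illustration; for a full Sphinx project you’d run sphinx-build instead.

```python
# Minimal sketch: convert a single reStructuredText file to HTML.
# Assumes docutils is installed (pip install docutils); the file
# names "notes.rst" and "notes.html" are hypothetical examples.
from docutils.core import publish_file

publish_file(
    source_path="notes.rst",        # your plain-text note
    destination_path="notes.html",  # generated output
    writer_name="html",             # other writers exist, e.g. "latex"
)
```

The same source file stays readable as plain text even if you never run a converter, which is what makes the format future-proof.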
Trac was great years ago. As far as I know, they were stuck on Python 2 until the very last moment three years ago, so it became almost unusable, and the UI still isn’t responsive today; it’s not usable on a phone. It used to be really great, but be careful about relying on it before researching its current state of development.
You are right. I’ve updated the naming. Thanks for your feedback, very much appreciated.