Link to the article without the paywall
When I walk around my uni, people openly talk about using ChatGPT to pass their classes. When I ask for help in some lecture group chat, the first four answers are “I just used ChatGPT.”
At the beginning they gave us a whole speech about how seriously they take academic dishonesty, but honestly I'm just disappointed now. These days, even merely using solution manuals makes you count as a “good student”.
Has it gotten that bad? No wonder the review sites for my state uni are full of complaints about the school's lack of direction.
Once these AI companies go belly-up, those people with critical thinking and research skills will be able to name their price.
Those abilities have been in high demand for millennia. Focus on the basics.
Probably not going to go belly-up for a while, but the enshittification cycle still applies. At the moment, investors are pouring billions into the AI business, and as a result, companies can offer services for free while only gently nudging users towards the paid tiers.
When the interest rates rise during the next recession, investors won’t have access to money any more. Then, the previously constant stream of money dries up, AI companies start cutting what the free tier has, and people start complaining about enshittification. During that period, the paid tiers also get restructured to squeeze more money out of the paying customers. That hasn’t happened yet, but eventually it will. Just keep an eye on those interest rates.
Probably not going to go belly-up for a while
Don’t be so sure about that, the numbers look incredibly bad for them in terms of money burned per actual revenue, never mind profit. They can’t even pay for the inference alone (never mind training, staff, rent,…) from the subscriptions.
As long as they can convince investors of potential future revenue, they will be just fine. In the growth stage, companies don’t have to be profitable because the investors will cover the expenses. Being profitable becomes a high priority only when you run out of series F money, and the next investors can’t borrow another 700 million. It’s a combination of having low interest rates and convincing arguments.
BTW I don’t think this is a good way to run a company, but many founders and investors clearly disagree with me.
The difference between AI companies and most other tech companies is that AI companies have significant expenses that scale with the number of customers.
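That point can be made concrete with a toy unit-economics sketch (all numbers here are made up for illustration): classic software has near-zero marginal cost per user, so adding users improves margins, while an LLM service pays for GPU inference on every request, so if per-user cost exceeds per-user revenue, growth makes the losses bigger.

```python
# Toy unit-economics sketch. All figures are hypothetical, chosen only
# to show the shape of the problem, not real numbers from any company.

def monthly_margin(users, revenue_per_user, fixed_costs, cost_per_user):
    """Profit = revenue - fixed costs - per-user (e.g. inference) costs."""
    return users * revenue_per_user - fixed_costs - users * cost_per_user

# Classic software: per-user cost is nearly zero, so scale helps.
classic = monthly_margin(1_000_000, 10, 5_000_000, 0.05)
print(classic)  # 4950000.0 -> profitable, and more users only improve it

# LLM service: per-user inference cost can exceed per-user revenue,
# so every additional user deepens the loss.
llm = monthly_margin(1_000_000, 10, 5_000_000, 12)
print(llm)  # -7000000 -> more users, bigger losses
```

With those assumed numbers, the classic service gets more profitable as it grows, while the LLM service loses more the more popular it becomes, which is exactly why investor money is load-bearing.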
That’s a very good point. Actually, video hosting services also suffer from a similar problem, and that’s one of the main reasons why it’s so hard to compete with YouTube. Since there are so many LLM services out there at the moment, it makes me think that there must be a completely ridiculous amount of investor money floating around there. Doesn’t sound like a sustainable situation to me.
Apparently, the companies are hoping that everyone gets so hooked on LLMs that they have no choice but to pay up when the inevitable tsunami of enshittification hits us.
There are some numbers in this blog post https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/ (and a couple of others on the same blog), and they really make it look like OpenAI isn't going to last the couple of years it would take to reach profitability.
Wow, those are some pretty big numbers! About 10x bigger than what I was thinking. I knew these things can get pretty weird, but this is just absolutely wild. When expectations fly that high, the crash can be all the more spectacular.
When you notice that your free account can’t do much, that’s a sign that OpenAI is beginning to run out of money. When that happens, the competitors will be ready to welcome all the users who didn’t feel like paying OpenAI.
Thank fuck I graduated college decades ago…
Actual education/teaching is under assault in the US from all sides these days… Not certain today’s students have any chance. 🙄 🤦♀️ 🖕 💩
I graduated just before the fuckery, and they were already at the forefront of using software to weed out job applicants. When I was in HS, students had already given up on doing homework, and they were passing people with failing/D grades through to graduation. Everyone who graduated during the pandemic, or is taking classes now, says my old college is pretty bad these days, because most of the classes went online. There are also other prevailing issues that were never solved when I was still in college. Them USING CHATGPT is probably a step up from just copy-pasting content from various sites with the same exact question or essay.
It is not just the US… VSE, the Prague University of Economics and Business, has decided to abandon graduation theses, because it is supposedly “impossible to verify” whether they were written by the student or by AI, and has replaced them with a “hands-on” graduation project.
“hands-on” graduation project
Does hands-on mean supervised?
No, it means they will try to somehow apply the knowledge they acquired to a real-life problem: creating a project instead of writing a text.
Well, how would you verify whether a thesis was written by AI? Mind that accusations are a serious matter, so “I guess it sorta looks like AI” or a percentage spat out by some unreliable LLM-detection AI isn't going to cut it.
Well, not sure if it works the same everywhere in the world, but here you first write the graduation thesis and then you have to publicly defend it.
If the defense committee (or is it an attack committee, since it is the student who is on the defense? :D) can't ask questions that reveal whether the student actually wrote the paper and understands the topic, then what fucking pseudo-scientific field is that? (And the answer indeed is: economics 😂)
Then the student could just ask the AI to simulate a thesis defense and learn answers to the most likely questions.
The funny thing is, they would actually learn the material this way, through a kind of osmosis. I remember writing cheat sheets in college and finding I didn’t need it by the end.
So there are potential use cases, but not if the university doesn’t acknowledge it and continues asking for work that can be simply automated.
Then the student could just ask the AI to simulate a thesis defense and learn answers to the most likely questions.
While my opinion of economics as a field is not very high, I still have a high enough opinion of any teacher to believe they outperform shitty AI…
From what I've seen in some Reddit posts, they USE AI to accuse students of writing with AI.
Jesus how bad are their student papers that they can’t tell whether an AI wrote one?!
more like “how bad is the education”?
Well that’s a much better question.
Students are now prompting the AI to make it sound like a student wrote it, or putting it through an AI detector and changing the parts that are detected as being written by AI (adding typos or weird grammar, say). Even kids who write their own papers have to do the latter sometimes.
Perfect grammar and slightly unusual words in a paragraph. Could be a weird formulation from a student’s mind, could be AI. No way to really know.
If the peer reviewers are unable to differentiate between student output and AI output, then they are either incompetent or inundated with absolute garbage. The latter also suggests the former is true.
I just finished marking student reports. There are some sections clearly written without AI, some that clearly are written by AI, and then some sections where the ideas are correct, the grammar is perfect, and it is on topic, but it doesn’t seem like it is written in the student’s voice. Could be AI, could be a friend editing, could be plagiarism, could be written long before or after the surrounding paragraphs. It is not always obvious, and the edge cases are the problem.
the professors are using AI to sniff out AI.
Yeah, given the quality of AI outputs, they could just read the papers to spot it… you know… do their jobs? I mean, there are a few layers here for thesis review: the supervisor, the professor, the other peer reviewers. They are all supposed to review the paper and at least some of the data that led to its production.
There are people who want to do their own thinking and those who don’t. The ratio hasn’t changed much over time. Only the possibilities.
It’s fine… I’m sure we’ll have UBI anytime now. We’ll have AI working on it.
“Hi, what version of chatgpt did you use in your surgical training? Great!”
“You say the engineering team that designed this suspended walkway just used chatgpt during their training? Sounds good!”
I’m going to throw up.
“You are going to pay me to think.”
Nah, thank you, I will keep doing that my way…
I kinda don't like ChatGPT because it trains on everything you feed it (same goes for any LLM), and hopefully it won't be forced. Right??? (Probably)

I'm not supporting higher education becoming reliant on for-profit companies like this, but AI tutors and the like, if properly implemented, would be kinda awesome. For example, it's usually not feasible to have real-life staff on hand to answer student questions at all hours of the day. Especially in the earlier years of university, where the content is simpler, AI is more than capable of meeting needs like this.
I don’t fully agree with most of the people on this thread. I also hate AI slop being forced into what feels like all aspects of our life right now, but LLMs do have some genuine uses.
Yeah man, for-profit companies should be banned in higher ed. Also, unrelated: did you renew the license for your textbook?
🤔
Just ask ChatGPT what this means, it will explain it to you like you’re five.