

I think you’re the first account I’ve seen from the sooner instance, hope you’re having fun over there at least
Knowing this, it seems like a very low-quality study. They should probably redo it with multiple conditions.
If you make enough mistakes, speed is a detriment, not a benefit. Increasing speed lets you produce more summaries, but if you still need to correct and edit every one, all you’ve done is add a step: a human still has to read the document closely enough that they could summarize it themselves, and then edit the AI summary on top of that. So the bottleneck of a human reading the document and working on a summary is still there. It only gets slightly easier if the corrections needed are small and obvious.
Well, it can be great at making text too, but the use case has to be very good. Right now lots of companies in the B2B space are using LLMs as a middle layer in front of chat bots and navigation systems to improve how they function (a rough sketch of that pattern is below). They are also being used to generate unique lists and inputs for certain systems. On the consumer side, though, the use case is pretty mixed, with a lot of big companies just muddying their offerings instead of bringing any real value.
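To make the "middle layer" idea concrete, here's a minimal sketch in Python: free-text user input goes through an LLM that produces a structured intent, which the existing chat bot or navigation system then acts on. `call_llm` is a hypothetical stand-in for whatever model API a company would actually use, and the JSON schema is just an illustration.

```python
import json


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; a real deployment would call a model API here.
    return json.dumps({"intent": "navigate", "target": "billing settings"})


def route_user_message(message: str) -> dict:
    # Ask the model to reduce free-form text to a structured intent the
    # downstream chat bot / navigation system already understands.
    prompt = (
        "Extract the user's intent from the message below and reply only with "
        'JSON of the form {"intent": ..., "target": ...}.\n\n'
        f"Message: {message}"
    )
    raw = call_llm(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to a safe default if the model output is malformed,
        # so the legacy system never sees garbage.
        return {"intent": "unknown", "target": None}


if __name__ == "__main__":
    print(route_user_message("How do I change my payment method?"))
```

The point of the pattern is that the LLM only translates language into structure; the existing system keeps doing the actual work, which is why it tends to hold up better than consumer-facing "chat with our AI" features.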
I’m surprised more user-friendly distros don’t have this, especially the more commercial ones.