If peer reviewers are unable to differentiate between student output and AI output, then they are either incompetent or they are inundated with absolute garbage. The latter also suggests the former is true.
I just finished marking student reports. Some sections are clearly written without AI, some are clearly written by AI, and then there are sections where the ideas are correct, the grammar is perfect, and the content is on topic, but it doesn't read like the student's voice. It could be AI, a friend editing, plagiarism, or text written long before or after the surrounding paragraphs. It is not always obvious, and the edge cases are the problem.