I am friends with a solo maintainer of a major open source project.
He repeatedly complains that at the start of each semester he sees a huge spike of false or unprovable security-weakness reports and GitHub issues in the project. He suspects there is a Chinese university which encourages its students to find and report software vulns as part of their coursework. The reporters don’t seem to verify that what they describe is an actual security vuln, or even that the issue exists in his GitHub repo. He is very diligent and patient and tries to reproduce each report before closing it, but this costs him valuable time and very scarce attention.
He also struggles because the upstream branch has diverged from what the major Linux distributions have forked or pulled. Sometimes the reported vulns are in a Linux distro package’s default configuration of his app, not in the upstream default configuration.
And also, I’m part of the Kryptos K4 subreddit. In the past ~6 months, the majority of posts saying “I SOLVED IT!!!1!” are LLM copypasta (using an LLM to try to solve it soup-to-nuts, not to do research, ideate, etc.). It got so bad that the subreddit now bans users on their first LLM-slop post.
I worry that the fears teachers had of students using AI to submit homework have bled over into all aspects of work.
As a human being I really enjoy knowing things and being challenged to grow.
While crypto-style AI hype men can claim Claude is the best thing since sliced bread, the output of such systems is brittle and confidently wrong.
We may have to ride out the storm and continue investing in self-learning: big tech cannot truly spend 1.5 trillion on AI investment in 2025 without a world-changing return, and against that, a billion in revenue last year from OpenAI is nothing.
Kryptos K4 seems to me like a potential candidate for AI systems to solve if they're capable of actual innovation. So far I find LLMs to be useful tools if carefully guided, but more like an IDE's refactoring feature on steroids than an actual thinking system.
LLMs know (as in have training data for) everything about Kryptos: the first three messages and how they were solved, including failed attempts; years of Usenet/forum messages and papers about K4; the official clues; even the World Clock in Berlin, including things published in German. They can certainly write Python scripts that would replicate any viable pen-and-paper technique in milliseconds, and so on.
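For instance, the keyed Vigenère that cracked K1 and K2 is only a few lines of Python. The KRYPTOS-keyed alphabet and the key PALIMPSEST are the published facts; the exact tableau indexing below is one common reconstruction, not necessarily Sanborn's:

```python
# Keyed-Vigenere sketch in the style used for Kryptos K1/K2.
# The alphabet is the standard 26 letters re-keyed with "KRYPTOS".
ALPHABET = "KRYPTOSABCDEFGHIJLMNQUVWXZ"

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter by the key letter's position in the keyed alphabet."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(ALPHABET.index(ch) + sign * shift) % 26])
    return "".join(out)
```

A script like this lets you brute-force every candidate key against a ciphertext in milliseconds, which is exactly the kind of mechanical search an LLM could trivially generate.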
Yet as far as I know (though I don't actively follow K4 work), LLMs haven't produced any ideas or code useful to solving K4, let alone a solution.
Yeah, you would suspect that the individual elements of solving K4 exist in some LLM, but so far the LLM slop answers are just very confident and very wrong.
My biggest complaint is that the users aren’t skeptical. They don’t even ask the LLM to verify if the answer it just generated matches the known hints from the puzzle artist. Beyond that, they don’t ask it to verify whether the decryption method actually yields the plaintext it confidently spit out.
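That kind of sanity check is trivial to automate. A minimal sketch, assuming the position ranges below correctly reflect the cribs Sanborn revealed in 2010, 2014, and 2020 (`matches_cribs` is a hypothetical helper name, not anyone's published tool):

```python
# Check a candidate 97-letter K4 plaintext against the publicly
# revealed cribs: EAST (22-25), NORTHEAST (26-34), BERLIN (64-69),
# CLOCK (70-74), all 1-indexed.
CRIBS = {
    (22, 25): "EAST",
    (26, 34): "NORTHEAST",
    (64, 69): "BERLIN",
    (70, 74): "CLOCK",
}

def matches_cribs(candidate: str) -> bool:
    """Return True only if every revealed crib appears at its position."""
    if len(candidate) != 97:  # K4 is 97 letters long
        return False
    return all(candidate[start - 1:end] == word
               for (start, end), word in CRIBS.items())
```

Any "I SOLVED IT" post whose plaintext fails this five-line check can be dismissed without reading further.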
I’m super impressed with Claude Code, though. For my use case, planning and building iOS app prototypes, it is amazing.
The typical graduation-requirement paper doesn't get published in a professional journal, so I think professional journals do provide significant curation.
Medical? What's the point? I'm happy with 98% of doctors being able to handle known conditions, and only the few percent who are really interested doing research.
It makes the university look better if they do a lot of 'research' even if it's fake. There's not a real reason a doctor needs to do research for an MD.