https://ai-2027.com does a solid job of demonstrating the existential risk of the singularity. If it is actually approaching, we need leaders who will give potential black swan events the severe caution they are due.
I sure hope the theoretical timeline is compressed because the singularity under Donald Trump likely means that we're all dead due to misalignment.
Time to ship, change failure rate, rework rate, mean time to resolve, code complexity, code churn, average age of dependencies: there are plenty of reliable metrics for technical debt, but they have to actually be looked at to do any good.
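To make one of those concrete: code churn can be tallied straight out of `git log --numstat`. A rough sketch (the sample output and file paths here are made up for illustration, not from any real repo):

```python
from collections import Counter

def churn_from_numstat(numstat_output: str) -> Counter:
    """Tally lines added + removed per file from `git log --numstat` output.

    Each numstat line looks like "<added>\t<removed>\t<path>".
    Binary files report "-" for the counts and are skipped here.
    """
    churn = Counter()
    for line in numstat_output.splitlines():
        parts = line.split("\t")
        if len(parts) != 3:
            continue  # commit headers, blank lines, etc.
        added, removed, path = parts
        if added == "-" or removed == "-":
            continue  # binary file
        churn[path] += int(added) + int(removed)
    return churn

# In practice you'd feed in `git log --numstat --format=`; this is canned output:
sample = "10\t2\tsrc/core.py\n-\t-\tlogo.png\n3\t1\tsrc/core.py\n1\t0\tREADME.md"
print(churn_from_numstat(sample).most_common(2))
# [('src/core.py', 16), ('README.md', 1)]
```

The point isn't the script; it's that a hotspot list like this only helps if someone reviews it regularly.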
The problem is that technical debt is a more complex concept than something simple like velocity, and so it requires more metrics to measure properly.
First, it's not "can occur" but does occur 100% of the time. Second, sure, it does mean something is missing, but how do you test for "this codebase can withstand at least two years of evolution"?
You can spend a lot of time perfecting the test suite to meet your specific requirements and needs, but that would take quite a while, and at that point, why not just write the code yourself? I think the most viable approach with today's AI is still to let it code and steer it when it makes a decision you don't like, as it goes along.
You have to fight to get agents to write tests, in my experience. It can be done, but by default they don't. I've yet to figure out how to get any agent to use TDD - that is, write a test and then verify it fails. Once in a while I can get it to write one test that way, but it then writes far more code to make it pass than the test justifies, and so it still misses coverage of important edge cases.
I have TDD flow working as a part of my tasks structuring and then task completion.
There are separate tasks for writing the tests and for implementing. The agent that implements is told to pick up only the first available task, which will be the “write tests” task, and it reliably does so. I just needed to add instructions for how it should mark tests as skipped, because that was conflicting with quality gates.
Yes, definitely. I also don't think every project is able to create a plugin platform. Sometimes you just have a lot of interconnected components that influence each other.
What I was trying to say is that in future development, as a developer, one of the extra questions on your mind should be: can we turn this into a platform with separate plugins? Because you know those plugins can be written fast and cheap, and don't require top-notch engineering work.
But I think I get what you are saying: what you gain in plugin simplicity, you pay in effort to design the platform to support them.
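To illustrate the trade-off: the platform side is a small, stable contract plus a registry, and every plugin only has to implement that contract. A minimal sketch (the hook names and string-to-string signature are illustrative assumptions, not any particular framework):

```python
from typing import Callable, Dict

# The platform: a registry and a stable contract. This is where the design
# effort goes - once it's settled, plugins become cheap to write.
_PLUGINS: Dict[str, Callable[[str], str]] = {}

def plugin(name: str):
    """Decorator a plugin uses to register itself with the host."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        _PLUGINS[name] = fn
        return fn
    return register

# Two trivial plugins: each one only knows the contract, not the host internals.
@plugin("upper")
def upper_plugin(text: str) -> str:
    return text.upper()

@plugin("reverse")
def reverse_plugin(text: str) -> str:
    return text[::-1]

def run(name: str, text: str) -> str:
    """The host dispatches to whichever plugin was registered under `name`."""
    return _PLUGINS[name](text)

print(run("upper", "hi"))     # HI
print(run("reverse", "abc"))  # cba
```

The catch, as you say, is that keeping `run`'s contract stable across two years of evolution is exactly the hard platform work.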
I guess it will vary from project to project, so the typical "it depends" applies :).
People have been thinking about that for a long time, though. For that objective, LLMs don't seem to open up any new capabilities. If that problem could be solved, with really clean abstractions that dramatically reduce the context needed to understand one "module" at a time, then sure, LLMs will be able to take that and run. But it's a fundamentally hard problem.
This sentence appears to be unfinished.