I also love it. Finally, I am no longer constrained by syntax errors or forgotten API details. I can focus on the feature. It's like taking programming to a higher level - programming in English (instead of Java).
Same. As long as we are reading everything we're submitting upstream and working towards either cleaning up the slop or cataloguing it as debt, it's fine.
How does your framework compare to spec-driven development e.g. https://github.com/github/spec-kit? In my experience, spec-kit produces a lot of markdown files and little source code.
Very similar; in particular, the first phase is a lot of markdown and no code too. But spec-kit is clearly more mature and wider in features and support, while my scaffold is newborn and supports just Claude Code.
I feel that my scaffold hews closer to old-style waterfall: for example, it begins with the definition of the stakeholders, and it takes advantage of the less-adopted practice of maintaining assumptions and constraints, not just user stories and requirements.
A big difference is that I have introduced decisions, which are not just design decisions but also coding decisions: after the initial requirement elicitation phase, whenever the agent needs to decide on an approach or establish a pattern, that choice is crystallised in a decision artifact. The artifacts are indexed so that future coding sessions will automatically inject the relevant decisions into their context.
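One way to picture the indexing mechanism: decision artifacts tagged by topic, with a session loading only the decisions whose tags overlap what it is about to touch. This is a minimal hypothetical sketch, not the scaffold's actual implementation; all names and the tag-overlap heuristic are illustrative assumptions.

```python
# Hypothetical sketch of a decision index. A coding session declares the
# topics it will touch, and only the overlapping decisions are injected
# into its context. Names and the matching rule are illustrative.
from dataclasses import dataclass


@dataclass
class Decision:
    title: str
    tags: set[str]  # topics the decision covers, e.g. {"auth", "api"}
    body: str       # the decision text injected into the agent's context


def relevant_decisions(index: list[Decision],
                       session_tags: set[str]) -> list[Decision]:
    """Return decisions whose tags overlap the current session's topics."""
    return [d for d in index if d.tags & session_tags]


index = [
    Decision("Use JWT for auth", {"auth", "api"},
             "All endpoints verify a JWT bearer token..."),
    Decision("Repository pattern", {"persistence"},
             "Data access goes through repository classes..."),
]

# A session touching auth code pulls in only the auth-related decision.
context = relevant_decisions(index, {"auth"})
print([d.title for d in context])  # -> ['Use JWT for auth']
```

A real scaffold would likely match on file paths or embeddings rather than hand-written tags, but the injection step is the same: filter the index, prepend the matching decision bodies to the prompt.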
Another difference is that when using the scaffold you can state high-level goals, and if the project is complex enough the design will propose a split into multiple components. Every component can be seen as a separate codebase, with a different stack and procedures. In this way you obtain a mono-repo, but with shared requirements and design that help a lot with change management, because sometimes changes will affect several components, and without the shared requirements and design it would be pretty hard to automate.
A very nice video. It shows that computer games are glamorous on the outside, but once you look behind the scenes, they just look like normal software. I was also surprised to hear that the team did not rely only on computer-graphics textbook algorithms, but built their own pathfinding algorithm in a pragmatic manner.
Ok, impressive, but - why?
No current computer has a floppy disk drive anymore.
The web page claims building such a disk is a learning exercise, but the knowledge offered is pretty arcane, even for regular Linux users.
Is this pure nostalgia?
Well, in a world of finite resources, I think I would need a better reason to invest time in this topic than just "for the challenge". I simply think I have ample opportunities to do something more sensible with my time.
Climbing a mountain at least gives you bragging rights; I don't think a bootable floppy disk is impressing anyone these days.
Lefties sympathizing with criminals, sharing their wealth distribution fantasies, agitating against competing political views.
You've come a long way, CCC!
The initial idea was political, but with a clear focus on freedom of information and the power to govern your own personal data.
In all fairness: human senior devs view AI-written source code with some disdain, as it usually does not match their stylistic and idiomatic preferences (even when it is correct and fully working).
I don't think that untested code is the problem here - you can easily measure test coverage, and of course every CI/CD pipeline should run the existing unit and integration tests.
I am certain that LLMs can help you with judgment calls as well. I spent the last month tinkering with spec-driven development of a new Web app, and I must say the LLM was very helpful in identifying design issues in my requirements document and actively suggested sensible improvements. I did not agree with all of them, but the conversation around high-level technical design decisions was very interesting and fruitful (e.g. cache use, architectural patterns, trade-offs between speed and a higher level of abstraction).