Hacker News

Indeed, my own experience across several companies confirms this.

Yes, code quality is important, but we write code to solve a problem for the paying customer. If we can't solve it on time and on budget, it doesn't matter how well the code is written.

If you survive long enough you'll refactor the parts that are important.

Also, some parts are more important than others: anything involving money calculations or potential data loss should be written more carefully.



I've always called this "good problems to have." If you're at the point where your slapped-together solution doesn't cut it anymore, it means you're successful enough to actually need something better.

Don't use the solutions to hard problems when you don't have hard problems yet, because those solutions make trade-offs to meet constraints that you're not under. Ranch dressing at the grocery store has to be shelf stable, and a bunch of compromises go into getting it to that point. The ranch dressing you make at home can easily be better simply by ignoring those constraints.


> potential data loss should be written more carefully.

Doesn't this apply to any code that touches data intended to eventually be persisted? If so, IMO this applies to a huge portion of all software, I would guess more than half, because writes tend to be much more complex than reads IME.


Data loss usually occurs when you migrate, back up/restore, or upgrade data. A stupid internal tool can wreak havoc precisely because it's non-customer-facing and gets less stringent testing.

The bugs in CRUD operations are usually ironed out early, and their blast radius is limited to a small subset of the data. A mangled migration script moving data from one table to another, however, is really dangerous stuff, yet it's frequently treated as just an "internal tool".
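One way to reduce that risk is to run the copy and a sanity check inside a single transaction, so a mismatch rolls everything back instead of silently losing rows. A minimal sketch, using SQLite and hypothetical `users`/`users_v2` tables (not from the thread):

```python
import sqlite3

def migrate(conn):
    # "with conn" commits on success and rolls back on any exception,
    # so a failed check leaves the source and target untouched.
    with conn:
        conn.execute("INSERT INTO users_v2 (id, name) SELECT id, name FROM users")
        (src,) = conn.execute("SELECT COUNT(*) FROM users").fetchone()
        (dst,) = conn.execute("SELECT COUNT(*) FROM users_v2").fetchone()
        if src != dst:
            raise RuntimeError(f"row count mismatch: {src} source vs {dst} migrated")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
migrate(conn)
```

A row-count check is the crudest possible invariant; real migrations usually verify checksums or spot-check individual rows as well, but even this much catches whole-table mistakes before they commit.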

Floating-point calculation and storage are also tricky and should be written and tested with greater care.
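The classic trap: binary floats can't represent most decimal fractions exactly, which is why money code usually reaches for exact decimal arithmetic instead. A quick illustration in Python:

```python
from decimal import Decimal

# Binary floats can't represent 0.1 or 0.2 exactly, so the sum drifts.
subtotal = 0.1 + 0.2
print(subtotal == 0.3)  # False: subtotal is 0.30000000000000004

# Decimal does exact decimal arithmetic, which is what money needs.
exact = Decimal("0.1") + Decimal("0.2")
print(exact == Decimal("0.3"))  # True
```

Storing amounts as integer cents is the other common fix; either way, the point is to keep binary floating point away from money.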



