Hacker News

I have two answers:

1. Yes. To the same extent that we are all bad people by definition, made of base material and unworthy urges.

I'd love to have better programmers show me how I can make my tests better. The code is out there.

2. Even if I have good tests "by definition", a radical rewrite might make old tests look like "assert(2x1 == 2), assert (2x2 == 4)". Tests exist in a context, and radically changing the context can change the tests you need.

---

This is not in OP, but I do also have a problem of brittle tests in my editor. In this case I need to test a word-wrapping algorithm. This depends intimately on pixel-precise details of the font. I'd love for better programmers than me to suggest how I can write tests that are robust and also self-evidently correct without magic constants that don't communicate anything to the reader. "Failure: 'x' started at x=67 rather than x=68." Reader's thought: "Why is this a problem?" etc. Comments appreciated on https://git.sr.ht/~akkartik/lines.love/tree/main/item/text_t.... The summary at https://git.sr.ht/~akkartik/lines.love/tree/main/item/text_t... might help orient readers.
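One common way to keep such tests readable is to inject the font-measuring function instead of depending on a real font's pixel metrics. This is only an illustrative sketch, not the lines.love code: the names `wrap` and `fake_measure` are hypothetical, and the fake metric (10px per character) exists purely so the expected line breaks are self-evident to a reader.

```python
# Hypothetical sketch: decouple word-wrap logic from real font metrics by
# injecting the measuring function. Names are illustrative, not from lines.love.

def wrap(text, max_width, measure):
    """Greedy word wrap; measure(s) returns the pixel width of string s."""
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if measure(candidate) <= max_width:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word  # start a new line with the word that didn't fit
    if current:
        lines.append(current)
    return lines

# In tests, a fake metric (10px per character) makes expectations readable:
fake_measure = lambda s: 10 * len(s)

assert wrap("the quick brown fox", 100, fake_measure) == ["the quick", "brown fox"]
```

With the fake metric, "started at x=67 rather than x=68" failures disappear: the magic constants in the test are ones the reader can recompute in their head. Tests against the real font then shrink to a handful of integration checks.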



>> If your tests break or go away when your implementation changes, aren’t those bad tests by definition?

> 1. Yes. To the same extent that we are all bad people by definition, made of base material and unworthy urges.

Good and bad are forms of judgement, so let's eschew judgement for the purposes of this reply :-).

> I'd love to have better programmers show me how I can make my tests better.

Better is also a form of judgement, so I will not claim that I am or am not. What I will do is offer my perspective regarding:

> This is not in OP, but I do also have a problem of brittle tests in my editor.

Brittle tests are usually the result of being overly specific: they enforce implementation knowledge instead of verifying a usage contract. The assertions quoted above illustrate this (consider "assert (canMultiply ...)" as a conceptual alternative). What helps mitigate this is using key abstractions relevant to the problem domain along with insulating the implementation logic (not the same as encapsulation: insulation makes the implementation opaque to collaborators).
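The distinction can be sketched with a toy example (all names here are hypothetical, chosen only to contrast the two testing styles):

```python
# Illustrative sketch: the same behavior tested two ways.

class Cart:
    def __init__(self):
        self._items = {}          # implementation detail: dict of name -> qty

    def add(self, name, qty=1):
        self._items[name] = self._items.get(name, 0) + qty

    def count(self, name):        # part of the public usage contract
        return self._items.get(name, 0)

cart = Cart()
cart.add("apple")
cart.add("apple")

# Brittle: reaches into the implementation; breaks if _items becomes a list.
assert cart._items == {"apple": 2}

# Robust: verifies the usage contract; survives internal rewrites.
assert cart.count("apple") == 2
```

The second assertion keeps passing through a radical rewrite of the internals, because it only states what a collaborator is allowed to rely on.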

In your post, you posit:

> Types, abstractions, tests, versions, state machines, immutability, formal analysis, all these are tools available to us in unfamiliar terrain.

I suggest they serve a purpose beyond when "in unfamiliar terrain." Specifically, these tools provide confidence in system correctness in the presence of change. They also allow people to reason about the nature of a system, including your future-self.

Perhaps most relevant to "brittle tests" are the first two you enumerated - types and abstractions. Having them can allow test suites to be defined against the public contract they provide. And as you rightly point out in your post, having the wrong ones can lead to problems.

The trick is that identifying incorrect types and/or abstractions presents an opportunity to refine your understanding of the problem domain and improve the key abstractions/collaborations accordingly. Functional testing[0] is really handy for doing this fairly rapidly when employed early and often.

HTH

0 - https://en.wikipedia.org/wiki/Functional_testing



