Test code should count toward weight
This is a valuable point that I mostly agree with, though I’m only starting to think through its implications.
Based on the metrics I’ve recorded on projects I’ve worked on, a well-tested project commonly consists of 75% or 80% tests by lines of code. So if we counted test code, well-tested projects would be weighted four or five times more heavily than untested projects.
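To make that arithmetic concrete, here’s a tiny sketch (the line counts are hypothetical, chosen only to illustrate the ratio): if tests make up 80% of a codebase, counting every line multiplies the project’s weight by five relative to an identical project with no tests.

```python
# Illustrative only -- the numbers are made up, not measurements from a real project.

def weight_by_total_lines(production_lines: int, test_lines: int) -> int:
    """Weight a project by all of its lines, tests included."""
    return production_lines + test_lines

# Two hypothetical projects with identical production code:
untested = weight_by_total_lines(production_lines=10_000, test_lines=0)
well_tested = weight_by_total_lines(production_lines=10_000, test_lines=40_000)  # 80% tests

print(well_tested / untested)  # 5.0 -- the well-tested project weighs five times more
```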
Although I am a huge fan of hardcore testing and TDD, this seems unfair to me. A project might, in theory, use no tests because it has some other (hypothetical) means of maintaining fitness for use, internal design, maintainability, and quality.
If the untested project is not as good as a well-tested one, its users should be the ones to judge that, by selecting among competitors.
In essence, I’m saying that a poorly tested project, which will presumably have less useful functionality, be harder to maintain, and have more bugs, will be weighted down automatically by attracting fewer subscribers. We don’t need to weight it down further by code size.
On the other hand, if an untested project somehow still manages to provide useful functionality, stay maintainable and responsive to new requirements, and have few bugs, then it deserves a full share, rather than being artificially penalized for the methods it used to get there.
(In practice, I think it’s unlikely an untested project could do this. As an industry we haven’t found a practice that serves these purposes as well as good tests do. But the above seems right to me in principle, even though I’m a test zealot.)