Tags: #project
When choosing a software tool, you should compare the value of each tool under consideration.
We typically do this by creating a comparison matrix.
A comparison matrix should take the following properties into consideration:
- **Stability**: we can't really judge the quality of the issues raised, nor can we treat a higher issue count as a sign of poorly written software (a more popular tool will naturally accumulate more issues than a tool with very few users).
  What we *can* compare is the number of issues that have been closed versus those left open and stale (a code base may be very active and still show poor user engagement by ignoring opened issues).
  Even this is imperfect, though: a code base with far more users (and thus far more issues raised) may not be able to triage issues as quickly as one with very few users. A rough script for pulling these counts is sketched after this list.
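Here is a minimal sketch of how those open/closed counts could be pulled from GitHub's REST search API. The repository names and the `closed_ratio` helper are placeholders for illustration, not part of any particular tool's workflow, and heavier use would need an authenticated token because the unauthenticated search endpoint is tightly rate-limited.

```python
# Sketch: compare the closed-vs-open issue ratio of two tools via the
# GitHub search API. Repo names below are placeholders.
import requests

API = "https://api.github.com/search/issues"

def issue_count(repo: str, state: str) -> int:
    """Return the number of issues in the given state ('open' or 'closed')."""
    resp = requests.get(
        API,
        params={"q": f"repo:{repo} type:issue state:{state}"},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

def closed_ratio(repo: str) -> float:
    """Fraction of all issues that have been closed (a crude engagement signal)."""
    open_n = issue_count(repo, "open")
    closed_n = issue_count(repo, "closed")
    total = open_n + closed_n
    return closed_n / total if total else 0.0

for repo in ("example-org/tool-a", "example-org/tool-b"):
    print(f"{repo}: {closed_ratio(repo):.0%} of issues closed")
```

The resulting ratio is only a rough signal, for the reasons noted above; it is one data point for the matrix, not a verdict on its own.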
We will use the following emojis to identify comparative outcomes:
Note: I will sometimes mark both tools as neither 'winner' nor 'loser' when the data point isn't really indicative of either. For example, it might simply be 'of interest' (such as the date the project started).
|            | Tool A | Tool B |
| ---------- | ------ | ------ |
| Property A | ✅     | ❌     |
| Property B | ⚖️     | ⚖️     |
| Property C | 🗑     | 🗑     |