With ClassicPress using Semantic Versioning numbering for updates, I thought we could discuss ratings in a similar vein. WordPress.org uses a misleading ratings system: to rate a plugin, a user is asked to give it between 1 and 5 stars. This gives the misleading impression that a mediocre plugin, with equal satisfaction and dissatisfaction, would be rated 2.5, since that is half of 5; however, because the minimum rating is 1 rather than 0, the mediocre score is actually 3.
We can see the results of this with Gutenberg. Its rating is displayed as 1.9 out of 5; however, if people were instead asked to rate the plugin between 0 and 5, the rating would be closer to 1.1 out of 5. If asked to rate between 0 and 4, the rating would be closer to 0.9 out of 4 (it would be exactly this assuming everyone simply dropped their rating by one star, but the subjective psychological effect of the pseudo 5-star system could be skewing results).
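The conversion described above is just a linear rescale of the average. A minimal Python sketch, assuming every voter would simply drop their rating by one star on a true 0-based scale (the function name is mine, and 1.9 is the displayed Gutenberg average):

```python
def rebase_rating(avg, old_min=1, old_max=5, new_min=0, new_max=4):
    """Linearly map an average rating from one scale onto another."""
    fraction = (avg - old_min) / (old_max - old_min)
    return fraction * (new_max - new_min) + new_min

gutenberg = 1.9  # displayed average on the 1-5 star scale

# Shift everyone down one star: a 0-4 scale.
print(round(rebase_rating(gutenberg), 3))               # → 0.9
# Renormalise that onto a 0-5 scale.
print(round(rebase_rating(gutenberg, new_max=5), 3))    # → 1.125
# The 1-5 midpoint (3 stars) lands at the true midpoint of 0-4.
print(round(rebase_rating(3), 3))                       # → 2.0
```

Note how a plugin at the genuine midpoint of the 1–5 scale (3 stars) maps to exactly half marks on a 0-based scale, which is the point the paragraph above is making.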
Part of the problem is in designing the feedback interface. It's very easy, and visually pleasing, to just have empty stars and let users click them to indicate their rating; however, such a system fails to incorporate the 0-star rating. Either 0-star ratings are not permitted at all (as on WordPress.org), or it simply isn't clear to the user that they are an option. A sensible way around the problem is to use a drop-down menu instead, or ye olde radio buttons to select between options (example).