In the topics on the definition of Baseline (numbers of versions supported, ...) we use percentages of support a lot.
Everyone seems to agree that "more than 90%" is good and that "less than 90%" is questionable.
But this doesn't seem to be based on reality or research; it's more an intuition based on past experience.
I think it could be useful to "calibrate" our intuition around these numbers.
I don't have an immediate suggestion for how this can be done.
This is directly related to #190
We first need to agree on a dataset.
Without all looking at the same data we might all be talking about a different 90% :)
If we do find a dataset it might be interesting to test our intuition.
Maybe 80% is perfectly fine?
Maybe 99% really is required?
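To make the "same dataset" point concrete, here is a minimal sketch of how a feature's global support percentage could be computed. It assumes the `data.json` format published by caniuse (mirrored in the `caniuse-db` npm package); the feature id `flexbox` and the choice to count prefixed support are just illustrative, not a proposal:

```python
# A minimal sketch, not a definitive implementation: it assumes the
# data.json format published by caniuse (mirrored in the caniuse-db
# npm package), and "flexbox" is just an illustrative feature id.
import json
import urllib.request

CANIUSE_DATA = "https://unpkg.com/caniuse-db/data.json"

def support_percentage(data: dict, feature_id: str) -> float:
    """Sum the global usage share of every browser version whose
    caniuse support flag starts with "y" ("y" = supported,
    "y x" = supported behind a vendor prefix)."""
    total = 0.0
    for browser, versions in data["data"][feature_id]["stats"].items():
        usage = data["agents"][browser]["usage_global"]
        for version, flag in versions.items():
            if flag.startswith("y"):
                total += usage.get(version) or 0.0
    return total

with urllib.request.urlopen(CANIUSE_DATA) as response:
    caniuse = json.load(response)

print(f"flexbox: {support_percentage(caniuse, 'flexbox'):.2f}%")
```

Whether "y x" (prefixed) or "a" (partial) should count toward the percentage is exactly the kind of choice that changes what "90%" means, which is why agreeing on the dataset and the counting rules matters.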
Yes, we usually talk about a threshold somewhere above 90% and below 98%, and that almost always refers to caniuse.com stats, even if that's implicit.
I can think of multiple ways to go about testing that, with any given data set:
- Check multiple features that sit on a threshold boundary and survey developers on how they feel about them.
- Check usage counters for features that have crossed the threshold boundary and see whether crossing it leads to higher usage.
- Expert review: check multiple features on a threshold boundary and categorize them accordingly (see the sketch after this list).
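For the expert-review idea, a small extension of the earlier sketch could list the candidate features to look at. The 88–92% band here is an arbitrary illustration of "on a threshold boundary", not a proposed threshold:

```python
# Builds on support_percentage() and the loaded `caniuse` dict from
# the earlier sketch. The 88-92% band is an arbitrary illustration,
# not a proposal.
def features_near_threshold(data: dict, low: float = 88.0,
                            high: float = 92.0) -> list[tuple[float, str]]:
    hits = []
    for feature_id in data["data"]:
        pct = support_percentage(data, feature_id)
        if low <= pct <= high:
            hits.append((pct, feature_id))
    return sorted(hits, reverse=True)

for pct, feature_id in features_near_threshold(caniuse):
    print(f"{pct:6.2f}%  {feature_id}")
```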