develwoutacause’s Twitter Archive—№ 678

  1. #Idea for a metric to measure test usefulness: The rate at which a test fails when run on "non-submitted" code vs the rate it fails on submitted code. This provides a window into how often a dev broke the test, found a mistake, and fixed it, *before* submitting the code.
    1. …in reply to @develwoutacause
      You'd have to account for flaky and brittle tests somehow, which could skew these results, and I'm not sure how you'd strictly identify "non-submitted code" (maybe non-CI builds + PR CI builds?). I wonder if there's anyone out there already measuring something similar to this?
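To make the idea concrete, here is a minimal sketch of the proposed metric: the ratio of a test's pre-submit failure rate to its post-submit failure rate. All names (`TestStats`, `usefulness`) and the handling of the zero-denominator case are hypothetical choices for illustration; accounting for flaky tests, as the reply notes, would need additional machinery not shown here.

```python
from dataclasses import dataclass


@dataclass
class TestStats:
    """Hypothetical run counts for one test, split by where it ran."""
    presubmit_runs: int       # e.g. local + PR CI builds ("non-submitted" code)
    presubmit_failures: int
    postsubmit_runs: int      # CI builds on submitted/merged code
    postsubmit_failures: int


def usefulness(stats: TestStats) -> float:
    """Ratio of pre-submit failure rate to post-submit failure rate.

    A high value suggests the test catches mistakes before submission;
    a value near 1 suggests it fails roughly as often either way
    (possibly flaky rather than useful).
    """
    pre = stats.presubmit_failures / stats.presubmit_runs
    post = stats.postsubmit_failures / stats.postsubmit_runs
    if post == 0:
        # Every failure was caught before submit; treat as maximally useful.
        return float("inf")
    return pre / post
```

For example, a test that failed in 10 of 100 pre-submit runs but only 1 of 100 post-submit runs would score 10, while a flaky test failing at the same rate in both settings would score about 1.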