CI: Don't report name errors after an exception in refguide-check #13116
Conversation
I don't think we're going to agree here, so I'll let someone else decide.
If you are missing the words file, you can still run the remaining doctests without issues. For someone adding new docs elsewhere, blocking the doctest runs entirely isn't helpful.
You can run the doctests for just the part of the code you are changing, so a user is not crippled locally if we require the full dependencies only when the full suite is initiated.
Doing something like an xfail might be useful for developers, but would have also made it harder to see that it wasn't being run in CI. This preserves the test failure, but minimises the log spam.
The xfail would allow the suite to run locally without hassle for developers/users who don't have the dependency installed. Since the core team maintains the CI, we can probably deal with the burden of making sure the full set of deps are installed for a single matrix entry, but it causes more friction for users/developers working locally to deal with an error for a part of the code they are not touching.
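As a rough sketch of the xfail approach discussed above (hypothetical test and path names, assuming pytest; the actual refguide-check tests are structured differently):

```python
import os

import pytest

# Illustrative path only; the tutorial lets readers supply their own word list.
WORDS_PATH = "/usr/share/dict/words"

# xfail(run=False) marks the test as an expected failure and skips executing it
# when the optional data file is absent, so the rest of the suite still runs
# locally without the dependency installed.
@pytest.mark.xfail(not os.path.exists(WORDS_PATH),
                   reason="optional words file not installed", run=False)
def test_word_list_tutorial():
    with open(WORDS_PATH) as f:
        word_list = f.read().split()
    assert word_list
```

The trade-off noted above still applies: on CI, an xfail that silently never runs can hide the fact that the tutorial is not being exercised.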
I'm also pretty mystified by the decision not to just vendor a small/medium size file with the string data needed for this tutorial--it is just a bunch of plain text words, and the tutorial itself even indicates how easy it is to find such a list on various websites:
Another easy source for words are the Scrabble word lists available at various sites around the internet (search with your favorite search engine).
There's also an advantage to failing fast and early vs. running the entire suite to find out it will fail, when you could have gotten that information earlier.
I can't help but thinking your two preferred options are opposite extremes. Either fail loud and fail fast, or completely ignore the failure. Surely failing gracefully with an error message and running the rest of the test suite is a viable middle ground here?
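A sketch of that middle ground (hypothetical function names, not the actual refguide-check code): fail the missing-dependency check with an actionable message, but collect the failure and keep running the remaining checks.

```python
import os

def load_word_list(path):
    # Fail with a clear, actionable message instead of a bare traceback.
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"word list not found at {path}; see the tutorial for where to get one")
    with open(path) as f:
        return f.read().split()

def run_checks(checks):
    # Run every check, collecting failures instead of aborting on the first.
    failures = []
    for name, check in checks:
        try:
            check()
        except Exception as exc:
            failures.append((name, exc))
    return failures

failures = run_checks([
    ("words", lambda: load_word_list("/no/such/words/file")),
    ("smoke", lambda: None),  # stands in for the rest of the suite
])
```

The suite still ends red, so CI is unambiguous, but a developer without the optional file gets one clear error rather than a cascade.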
The tutorials are run as all-or-nothing so I don't think that's a useful work-around.
If CI breaks at some point down the line, the issue would only be spotted if someone coincidentally notices it in the log, vs. CI failing as soon as it breaks. From the user's perspective it's only a single error message that's easy enough to ignore, so I don't see that as an issue.
I think we've generally adopted a philosophy that CI should be either red or green, not "orange." We don't usually encourage scenarios where a test suite fails but we know why it fails, so it actually passed. So in that sense, yes, I'm pushing for one of two opposite extremes: success or failure, unambiguous feedback both for local development and for CI. Maybe vendoring the text file is the simplest solution if there is no license issue.
This is definitely an improvement, because it makes the errors much more understandable. That said, the tutorial is set up in a slightly annoying fashion; if the file is small enough we should indeed vendor it and make the life of tutorial readers easier.
I agree, but I'm not sure that's related to this structural improvement. We can keep CI green in combination with this fix, either by vendoring or by just installing the optional dependency.
Verified that this works better locally, so I'll merge this.
I think if the change had been proposed as an incremental fix independent of a broader improvement, rather than as a superior alternative, I may have been more inclined to agree.
```
>>> res = func()  # func raises an unexpected ValueError
>>> res
```

The second line raises a NameError because `res` is not defined: `func()` raised before the name `res` was bound. So, suppress these name errors after an unexpected exception. See the discussion in scipy/scipy#13116, where this first came in. Also note that here we are attaching the `had_unexpected_error` flag to the test, not the runner (which can possibly be reused for other tests).
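A minimal sketch of the mechanism described above, built on the standard-library doctest runner (the real refguide-check code differs; `had_unexpected_error` is the flag name from the commit message):

```python
import doctest
import io

class QuietNameErrorRunner(doctest.DocTestRunner):
    def report_unexpected_exception(self, out, test, example, exc_info):
        # A NameError after an earlier unexpected exception is follow-on
        # noise: the name was never bound because the earlier line raised.
        if exc_info[0] is NameError and getattr(test, "had_unexpected_error", False):
            return
        # Attach the flag to the test, not the runner, which may be reused.
        test.had_unexpected_error = True
        super().report_unexpected_exception(out, test, example, exc_info)

def func():
    raise ValueError("boom")

docstring = """
>>> res = func()
>>> res
42
"""
test = doctest.DocTestParser().get_doctest(
    docstring, {"func": func}, "demo", "demo.py", 0)

buf = io.StringIO()
QuietNameErrorRunner(verbose=False).run(test, out=buf.write)
report = buf.getvalue()  # reports the ValueError, not the cascading NameError
```

The report ends with the root-cause ValueError; the follow-on NameError from the second line is suppressed instead of burying it.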
What does this implement/fix?
Looking at gh-13111, the root cause of the CI failure was buried by `NameError`s from trying to run the rest of the tutorial without the `word_list` variable defined. This suppresses `NameError`s that happen after another unexpected exception. That brings the root cause of the failure to the bottom of the error log, where it's most likely to be seen.

Additional information
Comparison with the suggestions in #13113 (comment):