
CI: Don't report name errors after an exception in refguide-check #13116

Merged: 1 commit into scipy:master from the refguide-nameerrors branch, Nov 25, 2020

Conversation

@peterbell10 (Member)

What does this implement/fix?

Looking at gh-13111, the root cause of the CI failure was buried under NameErrors caused by running the rest of the tutorial without the word_list variable defined. This change suppresses NameErrors that occur after another unexpected exception, which brings the root cause of the failure to the bottom of the error log, where it is most likely to be seen.
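
Conceptually, the change amounts to something like the following sketch: mark the test once an unexpected exception has been reported, and stop reporting NameErrors from then on. (Hypothetical class name; this is a simplified illustration of the approach, not the actual refguide_check.py patch.)

import doctest

class NameErrorSuppressingRunner(doctest.DocTestRunner):
    def report_unexpected_exception(self, out, test, example, exc_info):
        # After the first unexpected exception in this test, swallow the
        # NameErrors that cascade from names that were never bound.
        if (getattr(test, 'had_unexpected_error', False)
                and issubclass(exc_info[0], NameError)):
            return
        test.had_unexpected_error = True
        super().report_unexpected_exception(out, test, example, exc_info)

In this sketch the flag is attached to the test object rather than the runner, so a runner reused across several tests does not carry the flag over; the scipy_doctest commit message further down discusses exactly this choice.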

Additional information

Comparison with the suggestions in #13113 (comment):

  • If you are missing the words file, you can still run the remaining doctests without issues. For someone adding new docs elsewhere, blocking the doctest runs entirely isn't helpful.
  • Doing something like an xfail might be useful for developers, but it would also have made it harder to see that the tutorial wasn't being run in CI. This preserves the test failure but minimises the log spam.
  • This is a general solution that makes the error log more readable for all failures, not just this specific failure.

@peterbell10 added the CI label (items related to CI tools such as CircleCI, GitHub Actions or Azure) Nov 22, 2020
@tylerjereddy (Contributor) left a comment

I don't think we're going to agree here, so I'll let someone else decide.

If you are missing the words file, you can still run the remaining doctests without issues. For someone adding new docs elsewhere, blocking the doctest runs entirely isn't helpful.

You can run the doctests for the part of the code you are changing; a user is not crippled locally if we require the full dependencies only when the full suite is run.

Doing something like an xfail might be useful for developers, but it would also have made it harder to see that the tutorial wasn't being run in CI. This preserves the test failure but minimises the log spam.

The xfail would allow the suite to run locally without hassle for developers/users who don't have the dependency installed. Since the core team maintains the CI, we can probably deal with the burden of making sure the full set of deps is installed for a single matrix entry, but it causes more friction for users/developers working locally to deal with an error in a part of the code they are not touching.
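
For concreteness, the xfail alternative would look roughly like the following under pytest (hypothetical test name and file location; refguide-check uses its own runner, so this is only an illustration of the idea):

import os
import pytest

# assumed location of the optional word list
WORDS_FILE = '/usr/share/dict/words'

@pytest.mark.xfail(not os.path.exists(WORDS_FILE),
                   reason='optional words file not installed')
def test_word_ladder_tutorial():
    ...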

I'm also pretty mystified by the decision not to just vendor a small or medium-sized file with the string data needed for this tutorial--it is just a bunch of plain-text words, and the tutorial itself even indicates how easy it is to find such a list on various websites:

Another easy source for words are the Scrabble word lists available at various sites around the internet (search with your favorite search engine).
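
As a rough sketch of the vendoring idea (hypothetical file path; not code from the tutorial), the doctest setup could read a plain-text word list shipped with the docs instead of depending on a system dictionary:

from pathlib import Path

# hypothetical vendored file, one word per line
word_file = Path('doc/source/tutorial/data/words.txt')
word_list = word_file.read_text().split()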

There's also an advantage to failing fast and early vs. running the entire suite to find out it will fail, when you could have gotten that information earlier.

@peterbell10 (Member, Author)

I can't help thinking your two preferred options are opposite extremes: either fail loud and fail fast, or completely ignore the failure. Surely failing gracefully with an error message and running the rest of the test suite is a viable middle ground here?

You can run the doctests for the part of the code you are changing; a user is not crippled locally if we require the full dependencies only when the full suite is run.

The tutorials are run all-or-nothing, so I don't think that's a useful workaround.

Since the core team maintains the CI, we can probably deal with the burden of making sure the full set of deps is installed for a single matrix entry, but it causes more friction for users/developers working locally to deal with an error in a part of the code they are not touching.

If CI breaks at some point down the line, the issue would only be spotted if someone coincidentally notices it in the log, vs. CI failing as soon as it breaks.

From the user's perspective it's only a single error message that's easy enough to ignore; I don't see that as an issue. In fact, I even have a scipy.optimize doctest failing on master locally and it hasn't been causing me any issues.

@tylerjereddy (Contributor)

From the user's perspective it's only a single error message that's easy enough to ignore.

I think we've generally adopted a philosophy that CI should be either red or green, not "orange." We don't usually encourage scenarios where a test suite fails but we know why it fails, "so it actually passed." So in that sense, yes, I'm pushing for one of two opposite extremes: success or failure, unambiguous feedback for both local development and for CI. Maybe vendoring the text file is the simplest solution, if there is no license issue.

@rgommers (Member)

This is definitely an improvement, because it makes the errors much more understandable - both the word_list one, which I had encountered locally before and had not understood was due to a missing optional dependency, and other future errors. So this should go in.

That said, the tutorial is set up in a slightly annoying fashion; if the file is small enough, we should indeed vendor it and make life easier for tutorial readers.

I think we've generally adopted a philosophy that CI should be either red or green, not "orange."

I agree, but I'm not sure that's related to this structural improvement. We can keep CI green in combination with this fix, either by vendoring or by just installing the optional dependency.

@rgommers added this to the 1.6.0 milestone Nov 25, 2020
@rgommers (Member)

Verified that this works better locally, so I'll merge this.

@rgommers merged commit d975f64 into scipy:master Nov 25, 2020
@peterbell10 deleted the refguide-nameerrors branch Nov 25, 2020
@tylerjereddy (Contributor)

I agree, but I'm not sure that's related to this structural improvement. We can keep CI green in combination with this fix, either by vendoring or by just installing the optional dependency.

I think that if the change had been proposed as an incremental fix independent of a broader improvement, rather than as a superior alternative, I might have been more inclined to agree.

ev-br added a commit to scipy/scipy_doctest that referenced this pull request Jun 4, 2022
>>> res = func()   # func raises an unexpected ValueError
>>> res

The second line raises a NameError because res is not defined:
`func()` raised before the name `res` was bound.

So, suppress these name errors after an unexpected exception.
See the discussion in scipy/scipy#13116,
where this first came in. Also note that here we are attaching the
`had_unexpected_error` flag to the test, not the runner (which can
possibly be reused for other tests).
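
The cascade described above can be reproduced with the standard doctest module alone (hypothetical func; running this prints two failure reports, the real ValueError followed by the noise NameError):

import doctest

def func():
    raise ValueError('boom')

source = """
>>> res = func()   # func raises an unexpected ValueError
>>> res
"""
test = doctest.DocTestParser().get_doctest(
    source, {'func': func}, 'cascade-example', None, 0)
doctest.DocTestRunner().run(test)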