Static Analysis tests
Eric Snow
eric.snow at canonical.com
Thu Apr 28 14:41:13 UTC 2016
On Wed, Apr 27, 2016 at 9:14 PM, Nate Finch <nate.finch at canonical.com> wrote:
> So, I don't really think the method of testing should determine where a test
> lives or how it is run. I could test the exact same things with a more
> common unit test - check the tls config we use when dialing the API is using
> tls 1.2, that it only uses these specific ciphersuites, etc. In fact, we
> have some unit tests that do just that, to verify that SSL is disabled.
> However, then we'd need to remember to write those same tests for every
> place we make a tls.Config.
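For anyone following along, such a unit test looks roughly like
this (the NewTLSConfig helper is made up for the sketch; the real
checks would target whatever shared helper we settle on):

    package sample

    import (
        "crypto/tls"
        "reflect"
        "testing"
    )

    // NewTLSConfig stands in for the hypothetical shared helper
    // that builds our tls.Config.
    func NewTLSConfig() *tls.Config {
        return &tls.Config{
            MinVersion: tls.VersionTLS12,
            CipherSuites: []uint16{
                tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
            },
        }
    }

    func TestTLSConfig(t *testing.T) {
        cfg := NewTLSConfig()
        if cfg.MinVersion != tls.VersionTLS12 {
            t.Errorf("want TLS 1.2 minimum, got %#x", cfg.MinVersion)
        }
        want := []uint16{tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384}
        if !reflect.DeepEqual(cfg.CipherSuites, want) {
            t.Errorf("unexpected cipher suites: %v", cfg.CipherSuites)
        }
    }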
The distinction here is between testing the code (execution) and
verifying that the code base follows prescribed constraints, whether
technical or organizational. Perhaps you could call the latter
"meta-testing". :) For testing the code, it is really useful to have
the tests next to the code. For meta-tests, e.g. static-analysis
against the code base as a whole, the relationship with any particular
package is tenuous at best.
So I agree with the others that "tests" which focus on the code-base
as a whole, via static analysis or otherwise, should be grouped
together in the directory tree. There's prior art with the
featuretests package, though it's not quite the same thing.
I also agree that Lingo would be an effective tool in cases like this
where we are enforcing a code-base-wide policy, e.g. "don't import
testing packages in non-test files" or "use the helper function when
creating a tls.Config". However, that does not mean we can't wrap
calls to Lingo in test methods (more on this below).
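To sketch what I mean (the exact lingo invocation here is
hypothetical; only the shape matters):

    package meta

    import (
        "os/exec"
        "testing"
    )

    // TestLingoPolicies shells out to the static-analysis tool so
    // that policy violations fail the normal test run.
    func TestLingoPolicies(t *testing.T) {
        out, err := exec.Command("lingo", "review", "./...").CombinedOutput()
        if err != nil {
            t.Fatalf("lingo reported problems:\n%s", out)
        }
    }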
>
> The thing I like about having this as part of the unit tests is that it's
> zero friction. They already gate landings. We can write them and run
> them just like we write and run go tests 1000 times a day. They're not
> special. There are no other commands I need to remember to run, no scripts I
> need to remember to set up. It's go test, end of story.
I agree that is useful, particularly if we can isolate them, e.g. with
"go test -short" or the testing tags that I proposed. I don't think
the objection is necessarily about running these "meta" tests via "go
test". Personally I kind of like that approach. It wouldn't be hard
to write test suites whose methods call out to the tools. That
would let us meet all our testing needs through "go test" while
still using the other tools independently for more specific needs.
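For example, isolating them under "go test -short" is as simple as:

    package meta

    import "testing"

    func TestCodebasePolicies(t *testing.T) {
        if testing.Short() {
            // Skipped under "go test -short", so the slow
            // whole-tree analysis stays out of quick runs.
            t.Skip("skipping static analysis in short mode")
        }
        // ... run the full static analysis here ...
    }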
For me, the objection is more about *where* such tests live.
>
> The comment about Lingo is valid, though I think we have room for both in
> our processes. Lingo, in my mind, is more appropriate at review-time, which
> allows us to write lingo rules in which we may not have 100% confidence. They can
> be strong suggestions rather than gating rules. The type of test I wrote
> should be a gating rule - there are no false positives.
Again, granular testing tags would let us express that distinction
in a way the test tooling could act on directly.
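For instance, with build tags (the tag name here is made up),
gating checks could sit behind a tag that CI always passes, while
suggestion-level checks stay opt-in:

    // +build gating

    package meta

    // Tests in this file are compiled in only when invoked as:
    //
    //     go test -tags gating ./...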
>
> To give a little more context, I wrote the test as a suite, where you can
> add tests to hook into the code parsing, so we can trivially add more tests
> that use the full parsed code, while incurring the 16.5-second parsing
> hit once for the entire suite. That doesn't really affect this discussion
> at all, but I figured people might appreciate that this could be extended
> for more than my one specific test.
Fair enough.
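To make that concrete for the list, I picture the shape roughly
like this (just a sketch, not Nate's actual code, and a real
version would walk the whole tree rather than a single directory):

    package static

    import (
        "go/ast"
        "go/parser"
        "go/token"
        "testing"

        gc "gopkg.in/check.v1"
    )

    func Test(t *testing.T) { gc.TestingT(t) }

    type staticSuite struct {
        fset *token.FileSet
        pkgs map[string]*ast.Package
    }

    var _ = gc.Suite(&staticSuite{})

    // SetUpSuite pays the parsing cost once; every test in the
    // suite then works from the shared ASTs.
    func (s *staticSuite) SetUpSuite(c *gc.C) {
        s.fset = token.NewFileSet()
        pkgs, err := parser.ParseDir(s.fset, ".", nil, parser.ParseComments)
        c.Assert(err, gc.IsNil)
        s.pkgs = pkgs
    }

    // TestExamplePolicy is a placeholder for a code-base-wide
    // check that walks the shared ASTs.
    func (s *staticSuite) TestExamplePolicy(c *gc.C) {
        c.Assert(s.pkgs, gc.NotNil)
    }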
> I certainly wouldn't advocate people
> writing new 17-second tests all over the place.
Oh good! :)
-eric