forked from OSchip/llvm-project
1 Commit
Author | SHA1 | Date
---|---|---
Chandler Carruth | bf6a4e0b39 |

**Add the 'googlemock' component of Google Test to LLVM's unittest libraries.**
I have two immediate motivations for adding this:

1) It makes writing expectations in tests *dramatically* easier. A quick example that is a taste of what is possible:

    std::vector<int> v = ...;
    EXPECT_THAT(v, UnorderedElementsAre(1, 2, 3));

This checks that v contains '1', '2', and '3' in some order. There is a wealth of other helpful matchers like this. They tend to be highly generic and STL-friendly, so in almost all cases they will work out of the box even on custom LLVM data structures. I actually find the matcher syntax substantially easier to read even for simple assertions:

    EXPECT_THAT(a, Eq(b));
    EXPECT_THAT(b, Ne(c));

Both of these make it clear what is being *tested* and what is being *expected*. With `EXPECT_EQ` this is implicit (the LHS is expected, the RHS is tested) and often confusing. With `EXPECT_NE` it is just not clear. Even the failure messages are superior with matcher-based expectations.

2) When testing any kind of generic code, you are continually defining dummy types with interfaces and then trying to check that those interfaces are manipulated in a particular way. This is what mocks are actually *good* for -- testing *interface interactions*. With generic code, there is often no "fake" or other object that can be used instead. For a concrete example of where this is currently causing significant pain, look at the pass manager unittests, which are riddled with counters incremented when methods are called. All of these could be replaced with mocks. The result would be more effective at testing the code by imposing tighter constraints; it would be substantially more readable and maintainable when updating the code; and the error messages on failure would carry substantially more information, since mocks automatically record stack traces and other context *when the API is misused* instead of trying to diagnose it after the fact.

I expect that #1 will account for the overwhelming majority of the uses of gmock, but I think that alone is sufficient to justify having it. I would actually like to update the coding standards to encourage the use of matchers rather than any other form of `EXPECT_...` macro, as they are IMO a strict superset in terms of functionality and readability.

I think that #2 is useful relatively rarely, but there *are* cases where it is useful. Historically, I think misuse of actual mocking as described in #2 has led to resistance toward this framework. I am sympathetic to this -- mocking can easily be overused. However, I think this is not a significant concern in LLVM. First and foremost, LLVM exposes abstract interfaces and dependency injection, which are the most prone to abuse with mocks, carefully and rarely, so there are few opportunities to abuse them. Second, a large fraction of LLVM's unittests exercise *generic code*, where mocks actually make tremendous sense, and gmock is well suited to building interfaces that exercise generic libraries. Finally, I still think we should be willing to have testing utilities in tree even if they should be used rarely. We can use code review to help guide usage here.

For a longer and more complete discussion of this, see the llvm-dev thread here: http://lists.llvm.org/pipermail/llvm-dev/2017-January/108672.html

The general consensus seems to be that this is a reasonable direction to start down, but that doesn't mean we should race ahead and use this everywhere. I have one test that is blocked on this landing, and it was specifically used as an example.
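A self-contained sketch of the matcher style described under #1; the test names and values below are invented for illustration and assume only the bundled gtest/gmock headers:

```c++
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include <vector>

using ::testing::Eq;
using ::testing::Ne;
using ::testing::UnorderedElementsAre;

TEST(MatcherExampleTest, ContainerContents) {
  std::vector<int> V = {3, 1, 2};
  // Passes as long as V holds exactly 1, 2, and 3, in any order.
  EXPECT_THAT(V, UnorderedElementsAre(1, 2, 3));
}

TEST(MatcherExampleTest, SimpleComparisons) {
  int A = 42, B = 42, C = 7;
  // The first argument is what is tested; the matcher is what is expected.
  EXPECT_THAT(A, Eq(B));
  EXPECT_THAT(B, Ne(C));
}
```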
Before widespread adoption, I'm going to work up some (brief) guidelines, as some of these facilities should be used sparingly and carefully.

Differential Revision: https://reviews.llvm.org/D28156

llvm-svn: 291606
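For the interface-interaction use case described under #2, here is a minimal sketch of how hand-written call counters might be replaced with a mock. The `Visitor` interface and `visitAll` function are hypothetical stand-ins for whatever generic code is under test, not anything in-tree:

```c++
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include <vector>

// Hypothetical callback interface that some generic code accepts.
struct Visitor {
  virtual ~Visitor() = default;
  virtual void visit(int Value) = 0;
};

// Hypothetical generic code under test: calls the interface once per element.
static void visitAll(const std::vector<int> &Values, Visitor &V) {
  for (int Value : Values)
    V.visit(Value);
}

// Mock implementation; gmock records and checks every call automatically.
struct MockVisitor : Visitor {
  MOCK_METHOD1(visit, void(int));
};

TEST(MockExampleTest, VisitsEachElementExactlyOnce) {
  MockVisitor M;
  // Instead of incrementing counters, state the expected interactions.
  EXPECT_CALL(M, visit(1));
  EXPECT_CALL(M, visit(2));
  EXPECT_CALL(M, visit(3));
  visitAll({1, 2, 3}, M);
}
```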