Interest check: Boost.Mock

Hi all,

Over the past three-quarters of a year I've been working (with three friends) on a mocking framework for C++. In our vision it would be a good complement to Boost.Test for testing classes separately from the interfaces they require - it would improve the ability to unit-test the classes that tie others together and that implement higher-level algorithms. It does this by creating an object that is "derived" from a given class at runtime, and replacing its functions with functions that redirect to verifying logic. The verifying logic uses basic structures to record expectations and to verify that they happened. These ways of creating an object at runtime and modifying it are highly compiler dependent but remarkably portable - the library runs on at least 3 different compiler series (GCC, MSVC, EDG-based) and 5 platforms (Windows XP, Windows CE, Linux, QNX, bare platform) with few modifications. The library has already been released under the name "Hippo Mocks".

I'm interested in submitting it to Boost for three reasons. First, I believe there may be obvious or very interesting optimizations or expansions that I do not notice. Second, as Christian has elaborated already, having the Boost tag on a project makes it more noticed and more used; as it has taken quite a bit of my time already, I would like people to use it. Third, Boost.Test allows testing code but does not support mocking. In unit testing, anything that is not a base library benefits from mocking, by abstracting away details that are irrelevant and that would break your tests unnecessarily.

For Boost the main changes would be superficial: renaming, and using existing default libraries instead of new implementations. For Hippo Mocks I decided to assume nothing beyond C++98, so there's a full Tuple implementation and so on.
If there's interest, I'll spend an evening porting the code to conform to the Boost coding guidelines and upload it to the vault (if I can - but that should be OK). I would like you to take a look at the code and tell me what you think of it.

Thanks for your time. Kind regards, Peter Bindels

Hello John, others,

2009/6/10 John Maddock <john@johnmaddock.co.uk>
Are there docs for Hippo Mocks we can all look at?
Of course - I forgot to add them. There is a tutorial for using it at: http://www.assembla.com/wiki/show/hippomocks/Tutorial_3_0

There is a preliminary (and incomplete) list of supported compilers and systems at: http://www.assembla.com/wiki/show/hippomocks/SupportedCompilerOrProcessorCom...

I've recently tested it with the VS2010 compiler, and it compiles and works fine without any changes. I want to test it soon with the C++0x lambdas to see if they also work.

To make using it in a project easier I've put it all in a single header file of about 2,500 lines. There is much repetition and duplication that could be removed using preprocessor magic, C++0x variadic templates, or C++0x/Boost libraries, but for the sake of simple integration I haven't done so. The version I would integrate into Boost would not have these constraints, so the header would shrink considerably. I also have a preliminary C++0x version for GCC 4.3+.

The code can be found at: http://svn.assembla.com/svn/hippomocks/trunk/HippoMocks/hippomocks.h

The C++0x version of the code (which is about half the number of lines and much clearer) can be found at: http://svn.assembla.com/svn/hippomocks/trunk/HippoMocks0x/hippomocks.h

Kind regards, Peter Bindels

on Wed Jun 10 2009, Peter Bindels <dascandy-AT-gmail.com> wrote:
Hi Peter, This looks like an interesting piece of work. It would, however, need much more complete documentation in order to become a Boost library, and would probably also need it in order to fare well in a review. I encourage you to have it reviewed and do what's required to have it accepted! Cheers, -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Peter Bindels wrote:
Can you please give specific usage examples you have in mind?
It does this by creating an object that is "derived" from a given class at runtime, and replacing the functions with functions that redirect to
What if I want to mock some concepts instead of interfaces?
Can you show an example of how much effort is required to mock something up?
Boost.Test does have some support for interaction-based testing already, including the class mock_object in <boost/test/mock_object.hpp>. That said, I'd be happy to offload this part and support your efforts. I'd like to know, though:
1. How does your solution compare with what I have in this header?
2. How does your solution compare to the Google Mock library?
3. Did you see the BoostCon presentation about mocks? How does your solution compare?
4. Will your solution support the interaction-based testing facilities inside Boost.Test (exception safety testing, tests for logged interaction expectations)?
Why not use Boost?
There is definite interest on my part to move further interaction based testing support and full featured mocks. Please upload your code and I can try to comment on it when I have time. Gennadiy

Hi Gennadiy,

2009/6/11 Gennadiy Rozental <rogeeff@gmail.com>
Can you please give specific usage examples you have in mind?
Mocking with the least amount of boilerplate code, the clearest error messages, and the most usability. In general, applications consist of piles of classes with interactions, tied together with design patterns and architectural patterns and constraints. In the context of unit testing these classes are tied together so tightly that testing them involves testing the underlying classes. If those are fragile (by their nature or implementation) the upper tests are fragile as well. A project I recently joined at work has about a thousand test cases, about 200 of which break if something in the lower layers falls over. Tracing back to the problem becomes hard, to say the least. Mocking out the bottom-end classes resolves this, since only the interface of the dependencies is tested, not the implementation. Creating tests with mocks using macros is tedious, and using external programs is limited; with this method they're versatile and quick. I must admit I'm a slight bit confused as to what kind of answer you're expecting.
I haven't spent any time on mocking concepts so far, but I suspect it to be possible. The last time I used ConceptGCC it had a start-up time of 30 seconds, and I haven't checked since. That was over a year and a half ago, so I suspect it has evolved since then. I need to check with it to see how to make it work.

Can you show an example of how much effort is required to mock something up?

class IFoo {
public:
    virtual int getLength(std::string what) = 0;
};

void test_function() {
    MockRepository mocks;
    IFoo *foo = mocks.InterfaceMock<IFoo>();
    mocks.ExpectCall(foo, IFoo::getLength).With("Hello World").Return(42);
    std::cout << foo->getLength("Hello World") << std::endl;
}

This is close to the lower limit for using a mock at all. The first line of the test can be put into the testing framework, making the setup even shorter.

Boost.Test does have some support for interaction-based testing already.
I've spent a bit of time searching the Boost mailing list and the Boost.Test docs beforehand to see if something like this was already implemented, but I failed to see it; I think it's completely absent from the docs. As far as I can tell, mock_object tests afterwards. I must admit that in the hour I took to look at it, I suspect I haven't quite figured out what it does. The three test cases I found that use it (one which logs output and two that test exception safety) didn't seem to adequately explain the complexity of the code.
1. How does your solution compare with what I have in this header?
I think it is somewhat comparable, although less desirable for a developer of end code. For testing within Boost it may well have the upper hand, being more general; most people, however, do not develop Boost. Your mock_object requires inheriting from it and implementing all the functions with a default implementation, which increases the amount of code and reduces the maintainability of the test code. It does work in the case of multiple inheritance, and is easier to port to other compilers and platforms.

My solution requires setting up expectations beforehand, and it checks them with the fidelity that you choose. It requires you to put in your test which lower-level functions you expect to be called and which ordering relations between them you expect. It makes the actual creation of mock object classes implicit - nobody ever writes a full class that the compiler sees. This significantly reduces the chance of typos. It does require you to specify all functions that will be called, because otherwise they have no implementation. There are a few drawbacks, mainly in the category of multiple inheritance (currently MI doesn't work) and awkward compilers (at least the EDG-based Green Hills compiler has an anomaly in its virtual function tables that makes a single test case fail).

2. How does your solution compare to the Google Mock library?

Google uses more macros and needs much more code to do comparable things. Also, like most other libraries, Google's mocking library requires creating a mock class for each interface to be mocked, and thereby requires you to mock the full interface each time. That creates a larger barrier to refactoring and code improvement, as you need to adjust more code that isn't related to the change. Google Mock does support multiple inheritance, and has a default behaviour of ignoring function calls that some people will prefer over my default behaviour of throwing an exception.
In short, Google Mock has more code and more complexity in setting it up, more overhead in keeping it working through changes, and more code to be written by the user, for minimal advantage.
3. Did you see the BoostCon presentation about mocks? How does your solution compare?
Sadly, I was unable to attend BoostCon. I've read his presentation, but it is a bit hard to follow without his explanation, and I can't find a video of the talk. I cannot find any release of BMock other than a request to email him for permission to use it, which seems counter to the Boost license and goals as I understand them.

4. Will your solution support the interaction-based testing facilities inside Boost.Test (exception safety testing, tests for logged interaction expectations)?
Testing a log that has been recorded is mostly identical to specifying the log beforehand and testing against it. The minor advantage is slightly less setup work, which likely translates into not having to think about the interactions of a test. Adjusting the code to include a default implementation (such as one that records a log) is very little work, on the order of minutes. The main downside is that functions without a default implementation or return value throw an exception by default; before they throw, any code can be run, but that code does not know which function is being called.
So far, to include mocking functionality, no paths have to be set up, no software installed, no configuration done, no prerequisites installed. Requiring Boost would be a major step up from that. If the library were integrated into Boost, this objection of course falls away; that would allow for much more generic code in the implementation.

There is definite interest on my part to move further interaction based testing support and full featured mocks.
The mocks I currently support are much like the mocks in C# (Rhino Mocks) and Java (jMock), but well adjusted to regular C++ use: in the same way that Rhino Mocks uses C# reflection, I use C++ argument matching and templates to give as much information about errors as possible and to let the user type (and maintain) as little as possible. I'm definitely interested in expanding interaction testing and mocking in any direction somebody is interested in, but I myself see no further need. I'm interested in your reaction.

Please upload your code and I can try to comment on it when I have time.
The most direct link to the code is at http://www.assembla.com/spaces/hippomocks/documents/c3poual4Or3Q9ueJe5afGb/d.... I'm getting married in three days, which somewhat limits my time for adjusting it to Boost; I'll try to cook something up soon. I'm afraid this mail took a bit longer to type and compose than I expected. Some bits of logic may be missing; please ask if you can't follow part of it. Kind regards and thanks for your interest, Peter Bindels

Peter Bindels wrote:
[...]
I must admit I'm a slight bit confused as to what kind of answer you're expecting.
Actually, I wanted to see how it is going to look in conjunction with Boost.Test, as you mention in the paragraph you skipped.
Actually my question had nothing to do with Concepts from the next C++ standard. What I wanted to know is whether your library will be able to mock classes used to test a function/method template, in which case there is no base class at all and just a specific concept (a collection of methods and typedefs) is expected.
Interesting. I gave a perfunctory look at your docs and code, and aside from various implementation concerns (the ExpectCall #define is bad, not good at all, and I believe you don't actually support pure virtual functions) I am under the impression that you are trying to hack into the compiler's implementation of virtual functions (and maybe something else). Your code is not required to work according to the standard, right? If this is the case it might be a tough sell (at least on my side), though my even bigger concern (explained below) is the overall approach to interaction testing.
Yes. I never got to actually writing docs for this functionality.
Not sure what part you find complex. mock_object.hpp is for the most part just the definition of a simple class that mocks the most generic functions existing in C++ (constructors, assignments, various operators, etc.).
This functionality has nothing to do with testing Boost.
I looked into your docs/code and I must say I disagree with most of the above points (from the perspective of what is an advantage and what is a disadvantage). Originally, when I started to work on interaction-based testing support in Boost.Test, I looked around and found two predominant approaches:

1. Expectations explicitly specified along with the function being tested. This is essentially what your library does.
2. Expectations first recorded in some way (again with some explicit function calls, or with the code under test being executed a second time) and later tested against.

From my experience the first approach is unacceptable in most usage cases. Interaction-based testing is by its nature borderline "implementation testing". Thus it leads to expectations being changed comparatively frequently (in comparison to interfaces and other instances of state-based testing). Accordingly, you end up changing these expectations very frequently inside your test modules. It becomes very tiresome manual work if you have it in many test cases, and, what is worse, it's frequently difficult to see immediately what actually changed. For example, there may be a new call somewhere in the middle, and you end up reporting 10 errors of mismatched calls. The second approach has its downsides too - we do not want the test code to always look like a duplicate.

What I ended up doing is log-based expectations. In this approach your code looks like you just create some mocks and execute the test function. The test case can be run in two "modes": log mode and test mode. In the first mode it stores expectations in a log file. In the second mode it tests against that file. If differences are reported, you can generate a new log file and compare it using regular diff, and thus easily find what changed. In the majority of cases you find that the changes are expected; you replace the log file and that's it. No changes to the test code are required.
As for your statement that my approach requires implementing mocks and thus decreases maintainability, I believe it's actually quite the opposite. Instead of having to say in 50 different test cases that we now expect this call, that call, and a third call, I implement the mock *once* and do not need to encode expectations anywhere anymore. From what I can tell, my approach covers all that yours can do (while being portable) and some that yours can't. For example, your library can't be used for exception safety testing, while the Boost.Test solution includes support for "decision points" inside mocks that enable it. Boost.Test's interaction-based testing support is not 100% production quality for my taste yet (and obviously lacking docs), but I still prefer it to what you present.

It might make sense to combine these approaches in one comprehensive library. If one for whatever reason prefers explicit expectation specification, one should be able to do so, I guess. Also, logging might need to be made a bit more powerful.
In "record" mode it should not throw; it should log what it sees. Also, the framework should report all diffs, not just the first one.
I am not sure about general community opinion, but a library completely designed and built on non-standard implementation details of compilers is not something I'd like to see in Boost (obviously not an issue if that's not the case). Regards, Gennadiy

2009/6/10 Peter Bindels <dascandy@gmail.com>
This looks very interesting. I have been looking for a proper mocking framework for C++ for a while. I looked at the implementation, and it looks very good. One comment: registering overloaded functions can be done more simply:

struct Const {};
struct Volatile {};
struct NoQualifier {};

#define constMOCK_EATER ,Const
#define volatileMOCK_EATER ,Volatile
#define MOCK_EATER ,NoQualifier

template<typename Class, typename Signature, typename Qualifier>
class DeduceMemberFunction;

template<typename Class, typename R, typename A0>
class DeduceMemberFunction<Class*, R(A0), Const> {
    typedef R (Class::*Type)(A0) const;
};

// And a gazillion other overloads on signatures.

#define OnCallOverload(obj, signature, func) \
    RegisterExpect_<__LINE__, DontCare>(obj, \
        typename DeduceMemberFunction<BOOST_TYPEOF(obj), \
            BOOST_PP_CAT(signature, MOCK_EATER)>::Type(func), \
        #func, __FILE__)

Usage:

class IBar {
    int Test(int, double) const;
    int Test(int, double);
    void Test(double);
};

mocks.OnCallOverload(barMock, int(int,double) const, IBar::Test);

(The code is not tested, but I have a similar layout in a library I am developing where this technique is used.) Regards, Peder

On Wed, Jun 10, 2009 at 2:43 AM, Peter Bindels<dascandy@gmail.com> wrote:
This is a great idea for polymorphic interfaces, but some interfaces are not polymorphic. It would be nice if we could provide a mock implementation of an internal interface even when none of the calls to that interface are virtual. Think of a C API used as your internal interface:

struct bar;
bar * create_bar(....);
void destroy_bar( bar * );
void use_bar( bar * );

Basically you can alter the implementation by just linking with a different library instead of bar. (I'm not sure how this would work with the rest of the mock library.) Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode

Hi,

Gennadiy asked me to "give short description as to the approach taken by your solution (BMock), why specifically you choose to do it in this way and what are the advantages feature wise in comparison with what I have in mock_object."

BMock was developed following the "simplest thing that could possibly work" approach for unit testing, and especially Test-Driven Development, of embedded C/C++ software on a PC. Although it ideologically followed the Java EasyMock library, it took a very dramatic departure: BMock does not mock objects (or classes), but rather individual functions and methods. It is not a mock-objects framework, but a mock-FUNCTIONS framework. This is probably the main difference from any other C++ mock framework I'm familiar with, except for, perhaps, MockItNow. The latter, as far as I was able to understand, took a similar approach, but it relies on the profiler API, which I was trying to avoid.

The reason for mocking functions rather than objects was very simple: I needed to mock all kinds of weird, sometimes very nasty, "C" API functions coming from embedded-systems middleware. The whole point was to spend as much time as possible on the PC, since running a single test cycle on the target platform could take 5-10 minutes. Sometimes the target platform did not exist yet (remember Dijkstra's THE system?). That was my dilemma: I wanted to run my code on the PC, but sooner or later I hit the wall of needing some underlying API function. This API might not compile on a PC at all, unless you are willing to bring half of Embedded Linux under your test harness. Not all target code was written in C++; sometimes it was plain "C". I still wanted my test environment to utilize the whole power of modern C++, and for that reason I chose Boost.Test. Even when C++ was an option, virtual functions might not be; there are a number of security and hardware-specific reasons for this.
Therefore my goal was to be able to mock ANY C function and/or ANY C++ method in the most non-intrusive and lightweight fashion possible. I did not find anything better than to wrap every function I wanted to mock with an IDL-like annotation developed using Boost.Preprocessor. Over time my colleagues and I discovered that this is a pretty powerful mechanism. I could convert every function to a mock within the scope of a particular test (normally this is done in the fixture class constructor) as long as the function definition is wrapped with the BMOCK annotation. Quite soon we came to the conclusion that wrapping functions is a negligible price to pay for the convenience of flexible unit testing.

The second major decision was to provide as much information as possible at the level of the function definition rather than at the individual test level. The problem is again with "C" APIs. Consider, for example, the famous size_t read(void *, size_t); function. There is no way under heaven by which one could guess that the second argument states how much memory is reserved for the first argument. In C++ it would be possible to use smart objects, but not in "C", and as I said that was my first priority. The decision made in BMock was to reflect these relationships explicitly in the IDL-like annotations, something like BMOCK_FUNCTION(size_t, read, 2, (RAW_MEM(OUT, void *, buf, maxLen), IN(size_t, maxLen))). What this says is that the function read returns size_t and accepts two arguments. The first argument is of so-called raw memory type (handled using plain memcpy), is an output-only argument (no need to record an expected value, but it needs to return some value supplied by the test), and its output capacity is limited by the second argument. The second argument is just an input (we need to record and compare its value). Using this simple mechanism we were able to prevent a countless number of memory trashes, which are a real plague of the embedded world.
BMock adopted the record/replay model used by almost all mock frameworks: you first call the mock functions in the expected order with the expected in and out values, then switch to replay mode, and the library checks real values against expectations. I deliberately excluded from BMock support for many advanced features, such as non-strict values, ignore, non-strict order, etc. All this could be done, but in my experience, if a unit test is not short it will be more burden than asset. Less is really more here. For testing complex permutations and integration tests I personally prefer Bob Martin's FitNesse. In many respects this is a matter of taste and experience; there is nothing that would prevent adding more complex features to BMock as long as C++ allows them. I just did not have enough motivation to do it.

What was done is support for a so-called console mode. Here, rather than validating actual values against recorded ones, BMock just prints all input arguments and reads values for all output arguments to/from standard input/output. This allows a developer to play with her software in a simple interactive mode, or to run more complex tests using pre-recorded logs (it sounds like mock_object does something similar). One of my colleagues told me it was the first time she was able to understand how her module really worked. Using the same mechanism we were able to integrate BMock-based modules with the Java version of FitNesse. Today I would probably not do it that way, but would rather consider using the Python module of FitNesse/Slim (I admit this might sound unclear, but it would take too much space/time to elaborate on all the details). Unfortunately, support for console mode introduced some annoying complications into BMock's core functionality, and today I would prefer to completely decouple them.

To sum up: BMock is about mocking functions and methods (even inline ones!) rather than objects. The price to be paid is that the function/method needs to be wrapped with an IDL-like macro.
This macro is completely stripped out when the production version is built. It's not an ideal solution (for example, it messes up IntelliSense), but it was good for our practical needs. As I said elsewhere, I do not think that BMock is a candidate for an independent contribution to Boost, nor do I think it should be. I think the more practical solution would be to add a simple and lightweight extension to Boost.Test. Let me know if you need more information.
participants (8)
- Asher Sterkin
- David Abrahams
- Emil Dotchevski
- Gennadiy Rozental
- John Maddock
- Peder Holt
- Peter Bindels
- Steve M. Robbins