[rspec-users] Mocking, changing method interfaces and integration tests

David Chelimsky dchelimsky at gmail.com
Fri May 25 08:44:27 EDT 2007

On 5/25/07, Courtenay <court3nay at gmail.com> wrote:
> Wow, you did a great job of writing up that irc discussion we had.
> I just had a thought; what about if there was a way of "running" your
> mock against the real thing to see if they (still) match up?

Aslak had a similar idea a while back:


I'd like to see something like this - but controllable from the
command line. So you can run things as you normally do now, or you can
do something like:

$ spec spec --mock_mismatch
Thing#instance_method (not implemented)
- spec/a_spec.rb:37
- spec/b_spec.rb:42

- spec/c_spec.rb:13 - (argument mismatch: 2 for 1)

etc. The goal would be to point you to the right places to look to
learn about what needs to change, not to try to decipher the problem
in detail beyond the high level.
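
The check itself wouldn't have to be fancy. Just as a sketch (the helper
name and message strings here are invented, not anything in RSpec):

  # Given the real class, the name of a mocked method and the number of
  # arguments the mock expectation used, report any mismatch against the
  # real implementation.
  def mock_mismatch(klass, method_name, arg_count)
    unless klass.method_defined?(method_name)
      return "#{klass}##{method_name} (not implemented)"
    end
    arity = klass.instance_method(method_name).arity
    if arity >= 0 && arity != arg_count
      return "#{klass}##{method_name} (argument mismatch: #{arg_count} for #{arity})"
    end
    nil # no mismatch
  end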

I think this could help assuage some of Ruy's concerns, but not all.

Ruy: you stated that your concerns are really about mocking, not about
RSpec. I'd expand that to say they are about testing in general. You
may want to float them on some other lists like
testdrivendevelopment at yahoogroups.com and
extremeprogramming at yahoogroups.com. The mocking questions could also
go to user at jmock.codehaus.org and mocha-developer at rubyforge.org.

Apologies in advance if this sounds patronizing - but it sounds like
you're looking for some sort of silver bullet or "best practice". In
my experience, this is a road to disaster for two reasons. One, the
"best practice" gets in the way of actually thinking about problems.
Two, the acceptance of something as a "best practice" leads to a false
sense of security. "If I do this, then all will be right with the world."

What's worked for me has been a balance of programmer tests, customer
acceptance tests and exploratory testing. RSpec does a good job of
facilitating programmer tests, but can also help with acceptance tests
when coupled with other tools like selenium or watir (assuming you're
doing webapps).
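
For example, a tiny Watir script can exercise the real app in a real
browser with no mocks anywhere in sight (sketch only - the URL and page
text are made up, and it assumes Watir's classic IE driver):

  require 'watir'

  browser = Watir::IE.new
  browser.goto('http://localhost:3000/')
  raise "home page looks broken" unless browser.text.include?('Welcome')
  browser.close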

Keep in mind that TDD evolved on teams that had employed customer
acceptance tests as well as programmer tests. If you're ONLY doing
programmer tests, you should probably mock a bit less, or divide your
tests up into class-level and integration tests. RSpec's own tests are
a good example of this. We don't have a clean separation of these, but
if you look through RSpec's examples you'll see some that feel like
unit tests, some that feel like integration tests and some that feel
like story tests (i.e. customer tests).
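
To make the split concrete, here's the kind of thing I mean (Order and
Calculator are made-up classes, not anything from RSpec's own suite):

  # Class-level example: the collaborator is mocked, so this only fails
  # when Order's own logic changes.
  describe Order do
    it "asks the calculator for a total" do
      calculator = mock("calculator")
      calculator.should_receive(:total_for).and_return(42)
      Order.new(calculator).total.should == 42
    end
  end

  # Integration-level example: the real Calculator is used, so this is
  # the one that fails if Calculator's interface changes under Order.
  describe Order, "with a real Calculator" do
    it "computes the total from real line items" do
      order = Order.new(Calculator.new)
      order.add_item(:price => 10, :quantity => 2)
      order.total.should == 20
    end
  end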

Hope that helps.


> Courtenay
> On 5/24/07, Ruy Asan <ruyasan at gmail.com> wrote:
> > Suppose we have a method 'foo' which internally uses another method 'bar'.
> >
> > Being good BDDers we mock out the 'bar' method. After all, we only want to
> > spec the 'foo' method - actually running the 'bar' method means slower,
> > less maintainable and more brittle specs. That's why we <3 mocking, right?
> >
> > We rely on the fact that 'bar' is being adequately tested somewhere else, by
> > whoever wrote it. They can change their implementation of 'bar' and we can
> > change our implementation of 'foo' and as long as the existing tests keep
> > passing everyone is happy.
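> >
> > (In rSpec terms, something roughly like this - Widget is a made-up class
> > with 'foo' and 'bar':
> >
> >   describe "foo" do
> >     it "delegates the hard part to bar" do
> >       obj = Widget.new
> >       obj.should_receive(:bar).with("input").and_return("result")
> >       obj.foo("input").should == "result"
> >     end
> >   end
> >
> > Note that 'bar' never actually runs here - which is exactly the point.)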
> >
> > But what happens if 'bar' needs to be refactored in a way that changes the
> > interface? rSpec does succeed in informing bar's author that he is breaking
> > someone's expectations about the 'bar' function - however, he really has no
> > idea whose expectations he is breaking. (i.e. rSpec doesn't give us any
> > clues about what exactly has to follow our change in interface - it just
> > tells us something may be broken).
> >
> > Changing the interface of bar means going over the code the hard way,
> > getting in touch with everyone that uses the 'bar' function and making sure
> > everything still works. Not very agile! It makes changing interfaces a very
> > very expensive process. Of course, such changes are never really cheap - but
> > because of mocks it becomes really REALLY expensive.
> >
> > So what are our options?
> >
> > 1) Don't use mocks. Not using mocks would of course catch any such problems
> > right away. Yes mocks have benefits, but if we plan on refactoring often,
> > the above scenario may just be too high a price to pay.
> >
> > 2) Don't change interfaces (too often) - i.e. specs should be treated as
> > immutable. If it's such an expensive process just make sure it doesn't have
> > to happen that often ;)  I think if you have an all-vet, all-star team of
> > programmers this could work out just fine - but even then, having change be
> > expensive just isn't a good thing. Sometimes the app grows in a certain way
> > and changing an interface is simply The Right Thing To Do™ - but it will
> > probably break the app in a hard to fix way so Let's Just Stick With What
> > Works™ takes over :p
> >
> > 3) Rely on integration testing to catch these sort of bugs. This too can
> > work - although it seems to shift a significant burden on integration
> > testing. Doesn't it sort of imply we need 100% (or close to that)
> > integration test coverage? Doesn't that mean A LOT of integration testing,
> > and aren't integration tests horribly brittle, time consuming to write in
> > large numbers and a PITA to maintain?
> >
> > I'm actually kind of confused about how integration tests fit in with the
> > rSpec+Mocks way of doing things. I really don't hear of teams doing regular
> > integration testing in a true "let's test all our coupling"-sense. (and
> > speaking from experience, it really is a pain in the ass to do a lot of it -
> > SO BRITTLE!) At the same time, it also seems kind of necessary. In addition
> > to the above refactoring problem, there's the question of how reliable your
> > external libraries really are (which you are going out of your way not to
> > test by using mocks). They are often buggy, often famously so. Having a
> > 100:1 rSpec to code ratio won't save you from IE6 bugs :p Having a
> > javascript-heavy rails app with lots of IE6 users (not exactly an edge case
> > with rspec users...) means these bugs are rather important. You can't rely
> > on rSpec to catch problems before pushing out a new version of your app.
> > Doesn't this take away one of the main benefits of automated testing?
> >
> > Just aside from this whole business - I'm wondering how others are dealing
> > with this problem? JS unit testing can help, no doubt, but there are lots
> > and lots of things that can go wrong outside such tests, and like I
> > said, comprehensive integration tests are phenomenally hard to write and maintain.
> >
> > So basically all 3 of the above options are pretty crappy.
> >
> > One idea I had is to automatically translate
> > MyObj#should_receive(:method).and_return("value") into a
> > separate spec for MyObj but... that actually just defeats the whole purpose
> > of mock objects in a very round-about way :p
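> >
> > (i.e. the best the generated spec could really say is something like
> >
> >   describe MyObj do
> >     it "responds to the mocked method" do
> >       MyObj.new.should respond_to(:method)
> >     end
> >   end
> >
> > which proves the method exists but says nothing about what it actually
> > returns.)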
> >
> > Another idea is to have a spec runner option which ignores all mocks and
> > stubs - using the real objects instead. This run mode would ONLY be
> > triggered when someone changes an existing spec, specifically to answer
> > the question of "whose code did i just break". However - I think this would
> > radically change the way specs have to be designed to really work. :/
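> >
> > (The crudest version I can picture is guarding every expectation by
> > hand with an invented flag, something like
> >
> >   if ENV['NO_MOCKS']
> >     # let the real 'bar' run, so any mismatch blows up loudly
> >   else
> >     obj.should_receive(:bar).with("input").and_return("result")
> >   end
> >
> > which obviously doesn't scale - hence wanting support for it in the
> > runner itself.)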
> >
> > I'm curious to hear how I should deal with these problems. Since I've been
> > kinda rambly, I'll restate my complaints:
> >
> > (Note: when I say rSpec, I actually mean rSpec with heavy use of
> > mocking, which seems to be the recommended way to go. I am aware that rspec
> > != mocking, and that most of these complaints are actually more particular
> > to mocking than rspec, but nevertheless, rspec and mocks do go hand in hand
> > more often than not:)
> >
> > 1) If you change the interface of a function - all rSpec tells you is that
> > something may have broken (because your existing specs for the function in
> > question will fail initially). You don't actually know for a fact anything
> > broke, and you definitely don't know what broke. It's also very easy for
> > someone new to rSpec to not realize that changing an existing spec may lead
> > to undetected failure somewhere else.
> >
> > 2) Mocking external libraries and/or access to external applications assumes
> > these always work as expected, which as we all know, is a dirty lie :p
> > (handy as it may be). rSpec does not protect you against bugs in the
> > libraries you're using in any way.
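> >
> > (e.g. stubbing out a payment gateway like
> >
> >   gateway = mock("gateway")
> >   gateway.should_receive(:charge).with(100).and_return(true)
> >
> > just bakes in whatever we *hope* the library does - the spec stays
> > green even if the real thing raises, times out, or changed its return
> > value two versions ago. The gateway here is made up, of course.)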
> >
> > 3) rSpec + Rails leaves a very sizeable javascript and browser blind spot.
> > No way of dealing with this exists other than integration testing (a la
> > selenium), and there is no accepted way of doing integration testing that is
> > both feasible (i.e. doesn't take forever) and reliable (i.e. actually
> > covers a good part of your code).
> >
> >
> > Finally, I really should mention that I'm actually quite happy with rSpec
> > overall, and yes, I'm aware I'm asking a lot of rSpec here :)
