[rspec-users] respond_to? check in rspec-mocks
dchelimsky at gmail.com
Sat Aug 28 10:36:25 EDT 2010
On Aug 27, 2010, at 7:18 PM, Myron Marston wrote:
> One of the primary dangers of using mocks is that your unit tests may
> be testing against an interface that is different from that of your
> production objects. You may simply have misspelled the method (e.g.
> object.should_receive(:methd_name) rather than method_name), or you
> may have changed the interface of your production object without
> updating your tests.
> Obviously, you should have some integration coverage that will catch
> these kinds of errors (and I do), but it's nice when they're caught by
> your unit tests since they're so much faster than integration tests.
> I've been using a pattern to help with this for a while:
> it "safely mocks a method" do
>   object.should respond_to(:foo)
>   object.should_receive(:foo)
>   # ...
> end
> Basically, I add a respond_to? check before mocking or stubbing a
> concrete object (obviously, I don't do this for a pure mock object).
> If/when I rename the mocked method, I'll get a test failure. I think
> it'd be nice to add this to rspec-mocks itself. A few additional
> thoughts about this potential feature:
> * This would only apply when you're mocking/stubbing concrete objects;
> on a pure mock or stub it wouldn't do the check.
> * Should this print a warning or raise an error so the test fails?
> * Should it be configurable? I could see some people not wanting this
> feature, particularly if you're strictly following the outside-in BDD
> process where the specs on the outer layers (say, a controller in a
> rails app) mock methods that have not yet been defined on the inner
> layers (say, a model in a rails app).
> * This feature could potentially take things a step further and when
> you specify mock arguments using `with`, it could check the arity of
> the method and be sure that the method accepts that number of
> arguments.
> What do people think about this idea?
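The check being proposed could be sketched in plain Ruby roughly as follows. This is not rspec-mocks API; the helper name `safe_stub` and the `Ledger` class are invented for illustration, and the arity comparison only covers fixed-arity methods:

```ruby
class Ledger
  def record_deposit(amount)
    amount
  end
end

# Refuse to stub a method the object does not respond to, and optionally
# verify that the real method accepts the expected number of arguments.
def safe_stub(object, method_name, expected_args: nil)
  unless object.respond_to?(method_name)
    raise "#{object.class} does not respond to ##{method_name}"
  end
  if expected_args
    arity = object.method(method_name).arity
    # Negative arity means optional/splat args; only check fixed arity here.
    if arity >= 0 && arity != expected_args
      raise "##{method_name} takes #{arity} argument(s), not #{expected_args}"
    end
  end
  object.define_singleton_method(method_name) { |*| :stubbed }
end

ledger = Ledger.new
safe_stub(ledger, :record_deposit, expected_args: 1)
ledger.record_deposit(100)  # stub is in place

begin
  safe_stub(ledger, :record_deposti)  # misspelling caught immediately
rescue RuntimeError => e
  puts e.message
end
```

A real implementation inside rspec-mocks would hook into `should_receive`/`stub` rather than defining singleton methods directly, but the failure modes are the same.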
This idea has come up numerous times on this list over the years, but I have yet to see a suggestion or patch that makes it work for me. It's definitely not something I've ever felt RSpec was missing, probably because I tend to write specs at multiple levels, and I don't recall ever deploying something to production that failed due to an API getting misaligned. I'm not saying it's never come up in the development process, but the restrictions imposed by such a feature would, in my view, cost me more than the safety net it provides.
My other objection is that we're dealing with a dynamic language here, and there are going to be cases in which methods are defined dynamically. For average users, this is likely not a problem (as long as the check is done at the time the stub is invoked rather than when the stub is defined), but for anyone starting to explore meta-programming this would make things more confusing, IMO.
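To make the timing concern concrete, here is a minimal plain-Ruby sketch (class and method names invented). A respond_to? check performed when the stub is defined would reject a method that only comes into existence later, e.g. one a library generates at runtime from a schema:

```ruby
class LazyModel
  # Simulates a library that defines accessors at runtime,
  # e.g. from a database schema.
  def self.install_accessors!
    define_method(:balance) { 0 }
  end
end

model = LazyModel.new
before = model.respond_to?(:balance)  # false: a check here would reject the stub
LazyModel.install_accessors!
after = model.respond_to?(:balance)   # true: a check at invocation time succeeds
puts "before=#{before} after=#{after}"
```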
I've also seen plenty of cases where respond_to? fails to handle a case that method_missing handles. In those cases, users would get misleading information back, making things more confusing.
With all that said, there is one idea I've floated that I'd be open to: an audit flag that could be used to generate a report separate from the spec run. Specs would not fail due to any misalignment; you'd simply get a report saying something like:
Spec: Account#deposit adds deposited amount to its balance # ./spec/bank/account_spec.rb:37
Stub: ledger.record_deposit # ./lib/bank/account.rb:42
- ledger object did not respond to #record_deposit
- ledger.methods => [ ... list of public instance methods that are not already part of Object ... ]
This would be disconnected enough from the examples that it would stay out of the way, and it would make misalignments (the common case) easy to see.