[rspec-users] testing behaviour or testing code?

David Chelimsky dchelimsky at gmail.com
Sun Sep 2 12:43:13 EDT 2007


On 9/2/07, Pat Maddox <pergesu at gmail.com> wrote:
> On 9/2/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> > On 9/2/07, Pat Maddox <pergesu at gmail.com> wrote:
> > > On 8/24/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> > > > On 8/24/07, Pat Maddox <pergesu at gmail.com> wrote:
> > > > > On 8/24/07, David Chelimsky <dchelimsky at gmail.com> wrote:
> > > > > > describe Widget, "class" do
> > > > > >   it "should provide a list of widgets sorted alphabetically" do
> > > > > >     Widget.should_receive(:find).with(:order => "name ASC")
> > > > > >     Widget.find_alphabetically
> > > > > >   end
> > > > > > end
> > > > > >
> > > > > > You're correct that the refactoring requires you to change the
> > > > > > object-level examples, and that is something that would be nice to
> > > > > > avoid. But also keep in mind that in java and C# people refactor
> > > > > > things like that all the time without batting an eye, because the
> > > > > > tools make it a one-step activity. Refactoring is changing the design
> > > > > > of your *system* without changing its behaviour. That doesn't really
> > > > > > fly all the way down to the object level 100% of the time.
> > > > > >
> > > > > > WDYT?
> > > > >
> > > > > I think that example is fine up until the model spec.  The
> > > > > find_alphabetically example should hit the db, imo.  With the current
> > > > > spec there's no way to know whether find_alphabetically actually works
> > > > > or not.  You're relying on knowledge of ActiveRecord here, trusting
> > > > > that the arguments to find are correct.
> > > >
> > > > Au contraire! This all starts with an Integration Test. I didn't post
> > > > the code but I did mention it.
> > > >
> > > > > What I've found when I write specs is that I discover new layers of
> > > > > services until eventually I get to a layer that actually does
> > > > > something.  When I get there, it's important to have specs that
> > > > > describe what it does, not how it does it.  In the case of
> > > > > find_alphabetically we care that it returns the items in alphabetical
> > > > > order.  Not that it makes a certain call to the db.
> > > >
> > > > I play this both ways and haven't come to a preference, but I'm
> > > > leaning towards blocking database access from the rspec examples and
> > > > only allowing it in my end-to-end tests (using Rails Integration Tests or
> > > > - soon - RSpec's new Story Runner).
> > >
> > > Now that I've had a chance to play with Story Runner, I want to
> > > revisit this topic a bit.
> > >
> > > Let's say in your example you wanted to refactor find_alphabetically
> > > to use enumerable's sort_by to do the sorting.
> > >
> > > def self.find_alphabetically
> > >   find(:all).sort_by {|w| w.name }
> > > end
> > >
> > > Your model spec will fail, but your integration test will still pass.
> > >
> > > I've been thinking about this situation a lot over the last few
> > > months.  It's been entirely theoretical because I haven't had a suite
> > > of integration tests ;)  Most XP advocates lean heavily on unit tests
> > > when doing refactoring.  Mocking tends to get in the way of
> > > refactoring though.  In the example above, we rely on the integration
> > > test to give us confidence while refactoring.  In fact I would ignore
> > > the unit test (model-level spec) altogether, and rewrite it when the
> > > refactoring is complete.
> > >
> > > Here's how I reconcile this with traditional XP unit testing.  First
> > > of all, our integration tests are relatively lightweight.  In a web
> > > app, a user story consists of making a request and verifying the
> > > response.  Authentication included, you'll be making at most 3-5 HTTP
> > > requests per test.  This means that our integration tests still run in
> > > just a few seconds.  Integration tests in a Rails app are a completely
> > > different beast from the integration tests in the Chrysler payroll app
> > > that Beck, Jeffries, et al worked on.
> > >
> > > The second point of reconciliation is that mock objects and
> > > refactoring are two distinct tools you use to design your code.  When
> > > I'm writing greenfield code I'll use mocks to drive the design.  When
> > > I refactor though, I'm following known steps to improve the design of
> > > my existing code.  The vast majority of the time I will perform a
> > > known refactoring, which means I know the steps and the resulting
> > > design.  In this situation I'll ignore my model specs because they'll
> > > blow up, giving me no information other than that I changed the design of
> > > my code.  I can use the integration tests to ensure that I haven't
> > > broken any behavior.  At this point I would edit the model specs to
> > > use the correct mock calls.
> > >
> > > As I mentioned, this has been something that's been on my mind for a
> > > while.  I find mock objects to be very useful, but they seem to clash
> > > with most of the existing TDD and XP literature.  To summarize, here
> > > are the points where I think they clash:
> > >
> > > * Classical TDD relies on unit tests for confidence in refactoring.
> > > BDD relies on integration tests
> > > * XP acceptance tests are customer tests, whereas RSpec User Stories
> > > are programmer tests.  They can serve a dual-purpose because you can
> > > easily show them to a customer, but they're programmer tests in the
> > > sense that the programmer writes and is responsible for those
> > > particular tests.
> > >
> > > In the end it boils down to getting stuff done.  After a bit of
> > > experimentation I'm thinking that the process of
> > > 1. Write a user story
> > > 2. Write detailed specs using mocks to drive design
> > > 3. Refactor, using stories to ensure that expected behavior is
> > > maintained, ignoring detailed specs
> > > 4. Retrofit specs with correct mock expectations
> > >
> > > is a solid approach.  I'd like others to weigh in with their thoughts.
> >
> > Hey Pat,
> >
> > I really appreciate that you're thinking about and sharing this as it's
> > something that weighs on a lot of people's minds and it's clear that
> > you have some understanding of the XP context in which all of this was
> > born.
> >
> > That said, I see this quite a bit differently.
> >
> > I don't think this has anything to do w/ TDD vs BDD. "Mock Objects" is
> > not a BDD concept. It just feels that way because we talk more about
> > interaction testing, but interaction testing predates BDD by some
> > years.
>
> Hi David,
>
> Thanks so much for your thoughtful reply.

Thanks for your thought provoking post!

> You're right, and I didn't mean to suggest that mock objects were a
> BDD concept at all.  However it seems to me that BDDers embrace mock
> objects as a very useful design tool, whereas classical TDDers would
> use them sparsely, when a resource is expensive or difficult to use
> directly.

This is true to some extent, but the mock objects paper, which
introduced the idea of mocks-as-design-tool
(http://mockobjects.com/files/mockrolesnotobjects.pdf) was presented
at OOPSLA 04, and the thinking that it came from had already been
evolving.

> For example, Beck talks about mocking a database in his
> book, and that's that.  Astels demonstrates mocking the roll of a die.
> He does briefly use mocks before he's ready to implement the GUI part
> of the app.
>
> Those are the two TDD books with which I'm most familiar.  I'm sure a
> lot has changed in the TDD community since then, and indeed you can
> see that Astels' mentality has changed somewhat.  His "one assertion
> per test" article [1] parses an address and then verifies it by
> asserting the getters.  His remake, "one expectation per example" [2]
> is a bit different in that he passes a mocked builder in and uses that
> to verify that the parsing code works, exposing no getters at all.
> That to me signifies a fundamental shift in TDD thought.  Instead of
> thinking about objects in isolation and what services they provide, we
> think of the services an object provides and how it interacts with
> other objects and uses their services.
>
> I'm certain that it's not a new way of thinking, but hopefully you can
> see why I'd believe it's probably not mainstream.
>
> There's one other roadblock to my thinking, and it results from using
> RSpec almost exclusively within Rails projects.  I think it's obvious
> why you mock models when writing view and controller specs.  However
> less obvious to me is why mock associations in model specs, and I
> think it has to do with the fact that AR couples business and
> persistence logic.

Absolutely! AR presents quite a testing conundrum. It's clear from the
testing approach supported by Rails directly that decoupling from the
database is simply not of interest to DHH and company. Or at least
it wasn't early on. I see mock frameworks starting to appear in the
Rails codebase, so perhaps this is changing. And I don't mean to
suggest that the Rails core team approach is the wrong approach. It
simply does not align with what you've called "classical TDD
thinking".

> If we just had domain objects that never hit a database, then we might
> initially mock interactions but then use concrete instances when we
> later implemented those classes.  When I think of Beck's Money
> example, or Martin Fowler's video rental list in Refactoring, it seems
> silly to me to use mocks in those cases.

I think you're right. Even going down what I view as the ideal
mockist's path - mocking everything you need that doesn't exist yet -
I've often used mocks in process, but replaced them w/ the real deal
once the real objects existed. Then you're really using mocks for what
they're most powerful at: interface discovery. And then disposing of
them once they've outlived their usefulness in a given situation.

In the case of AR, I keep them around to keep from hitting the DB.
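
To make that concrete, here's a rough plain-Ruby sketch of the idea -
hand-rolled stand-ins rather than real RSpec mocks, so it runs without
Rails, and Widget here is a made-up model:

```ruby
# A made-up model whose class-level find would normally hit the database.
class Widget
  def self.find(opts = {})
    raise "would hit the database" # stands in for ActiveRecord's find
  end

  def self.find_alphabetically
    find(:order => "name ASC")
  end
end

# Replace find with a canned response -- the essence of what
# Widget.should_receive(:find) does in the earlier spec:
def Widget.find(*args)
  ["apple", "zebra"]
end

Widget.find_alphabetically # => ["apple", "zebra"], and the DB is never touched
```

The spec stays fast because the only thing exercised is the message
sent to find, not the query itself.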

> Perhaps you might at the
> very beginning, but you'd sub real objects in as you implemented them.

D'oh! You ARE an ideal mockist!

>  We don't do this with AR because they're simply too heavy.

Funny - I'm tempted to remove what I wrote above - but this is fun -
responding as I go and then discovering that you already made the same
point.

> This culminates in another general idea I've had which is to mock
> services in a lower layer, and use concrete instances for objects in
> the same layer when possible.  If we were to split AR into domain
> objects and a data access layer, the domain objects would mock calls
> to the data access layer but use concrete domain objects in the tests.
>  The unit tests remain fast and simple, and mocks no longer get in the
> way of refactoring.

Ay, there's the rub.

The problem we face is that AR promises huge productivity gains for
the non-TDD-er, and challenges the thinking of the die-hard TDD-er.
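
To sketch the split you describe (every name here is hypothetical):
the domain object takes a data-access collaborator, and that
collaborator is the one thing you'd mock:

```ruby
# Hypothetical layering: WidgetRepository is the data access layer,
# Widget and WidgetCatalog are plain domain objects.
Widget = Struct.new(:name)

class WidgetRepository
  def all
    raise "would hit the database"
  end
end

class WidgetCatalog
  def initialize(repository)
    @repository = repository
  end

  def alphabetical
    @repository.all.sort_by { |w| w.name }
  end
end

# In a spec, a stub repository stands in for the data access layer,
# while the domain objects stay concrete:
stub_repo = Object.new
def stub_repo.all
  [Widget.new("zebra"), Widget.new("apple")]
end

WidgetCatalog.new(stub_repo).alphabetical.map(&:name) # => ["apple", "zebra"]
```

Only the repository's interface gets mocked; the rest of the example
runs against real objects, so refactorings among the domain objects
don't break the spec.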

I've gone back and forth about whether it's OK to test validations like this:

it "should validate_presence_of digits" do
  PhoneNumber.expects(:validates_presence_of).with(:digits)
  load "#{RAILS_ROOT}/app/models/phone_number.rb"
end

On the one hand, it looks immediately like we're testing
implementation. On the other, we're not really - we're mocking a call
to an API. The confusion is that the API is represented in the same
object as the one we're testing (at least its class object). I haven't
really done this in anger yet, but I'm starting to think it's the
right way to go - especially now that we have Story Runner to cover
things end to end. WDYT of this approach?
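
For contrast, the behaviour-level alternative we'd be weighing it
against checks the outcome of the validation rather than the call that
declares it. A plain-Ruby sketch - the PhoneNumber here is a
hand-rolled stand-in, not a real AR model:

```ruby
# Hand-rolled stand-in for a model declaring validates_presence_of :digits,
# so this runs without Rails.
class PhoneNumber
  attr_reader :digits, :errors

  def initialize(attrs = {})
    @digits = attrs[:digits]
    @errors = {}
  end

  def valid?
    @errors[:digits] = "can't be blank" if @digits.nil? || @digits.empty?
    @errors.empty?
  end
end

number = PhoneNumber.new(:digits => nil)
number.valid?          # => false
number.errors[:digits] # => "can't be blank"
```

That version couples the spec to the behaviour rather than to AR's
API; the trade-off is that each example exercises a lot more of AR.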

>
> Of course then you're writing integration tests at a fairly low level
> I guess, but that's 100% acceptable to me in the interest of getting
> stuff done rather than being dogmatic.

+1 - in the end this is all about getting stuff done and knowing WHEN
you're done.

> > The problem we experience with mocks relates to the fact that
> > we've chosen to live in the beautiful, free, dynamically typed and
> > POORLY TOOLED land of Ruby. When Ruby refactoring tools catch up with
> > those of java and .NET, this pain will all go away.
> >
> > For example - if I'm in IntelliJ in a java project and I have a method
> > like this:
> >
> >   model.getName()
> >
> > and I'm using jmock (the old version), which uses Strings for method names:
> >
> >   model.expects(once()).method("getName").will(returnValue("stub value"))
> >
> > and I do a Rename Method refactoring on getName(), IntelliJ will ask
> > me if I want to change the strings it finds that match getName as well
> > as the method invocations.
> >
> > In Ruby, we do this now w/ search and replace. Not quite as elegant.
> > But under the hood, that's all IntelliJ is doing. It just makes it
> > feel like an integrated step of an automated refactoring.
>
> Agreed.  I guess for me it's easier to get the production code right
> and then fix the tests after the fact.  I'd hate to do all the work of
> changing the production and test code and then find out it was
> incorrect.  Fixing tests after fixing the production code amounts to
> the same work as doing it all in one step, because as you mentioned
> it's essentially a manual process.
>
> > re: Story Runner. The intent of Story Runner is exactly the same as
> > tools like FIT, etc, that are typically found in the Acceptance
> > Testing space in XP projects. In my experience using FitNesse, it was
> > rare that a customer actually added new tests to a suite. If there
> > were testing folks on board, they would do it (and they would likely
> > be equipped to do it in Story Runner as well), but if not, then the
> > FitNesse tests were at best the result of a collaborative session with
> > the customer and, at worst, our (developers') interpretation of
> > conversations we had had with the customer.
> >
> > I see Story Runner fitting in exactly like that in the short run. I
> > can also see external DSLs emerging that let customers actually write
> > the outputs that Story Runner should produce and run that through a
> > process that writes what we're writing now in Story Runner. But that's
> > probably some time off.
> >
> > I totally agree with your last statement that "it boils down to
> > getting stuff done." And your approach seems to be the approach that I
> > take, given the tools that we have. But I really think it's about tools
> > and not process. And I think that BDD is a lot more like what really
> > experienced TDD'ers do out of the gate. We're just choosing different
> > words and structures to make it easier to communicate across roles on
> > a team (customer, developer, tester, etc).
>
> So "ideally," who would write Story Runner stories?  I put it in
> quotes because I think it would differ greatly depending on the work
> environment, what kind of level of interaction you have with the
> customer, etc.  Using TDD terms, would we consider SR stories to be
> Customer or Developer tests?  I gather from your insight that they're
> Customer tests.

Yes - in my view they are Customer Tests - but bear in mind that that
means "tests created by the person acting in the customer role." On a
team of one, that might be the same person as the developer.

> Finally I agree 100% on not focusing on process.  I'm trying to figure
> out the most effective process given the tools currently available,
> and will be constantly changing it as more/better tools come along.
> Although I suppose what I should really be spending my energy on is
> building the tools that will make all our lives better ;)

Patches always welcome!

Cheers Pat.

David

>
> Pat

