[rspec-users] Spec'ing via features
matt at mattwynne.net
Tue Nov 25 14:16:41 EST 2008
On 25 Nov 2008, at 17:26, Ben Mabey wrote:
> David Chelimsky wrote:
>> On Tue, Nov 25, 2008 at 12:52 AM, Ben Mabey <ben at benmabey.com> wrote:
>>> Andrew Premdas wrote:
>>>> I came across this idea of dropping unit tests for acceptance
>>>> tests in the java world. I didn't like it there and I don't like
>>>> it here; maybe that's because I'm an old fuddy duddy or
>>>> something :). I do think that every public method of an object
>>>> should be specifically unit tested, and yes, that means that if
>>>> you refactor your object you refactor your unit tests. This isn't
>>>> really that much of a burden if you design your objects to have
>>>> simple and minimal public APIs in the first place.
>>>> What is it that makes you think you can refactor code, run
>>>> acceptance tests, and be safe without unit tests? Writing tests
>>>> "that guarantee the correct functioning of the system" isn't
>>>> something you can just do. The best you can hope for with
>>>> acceptance tests is that part of the system functions correctly
>>>> most of the time in some circumstances.
>>>> Perhaps it's the BDD ideal that you're only writing the code you
>>>> need to make your acceptance tests pass that makes you think your
>>>> tests cover all your code. However, just because you've written
>>>> code to make an acceptance test pass doesn't mean that you can't
>>>> break this code in a multitude of different ways.
>>>> Do you really think that BDD-created code is just some black box
>>>> you can tinker around with, restructure, and still be sure it
>>>> works because your black box tests still work?
>>>> I just don't believe you can get the coverage you need for an
>>>> application using acceptance testing / features alone. If you do
>>>> actually write enough features to do this you'll end up doing much
>>>> more work than writing unit tests combined with features.
>>> +1 again.
>>>> All best
>>> Here is how I look at the two sets of tests...
>>> Features at the application level (acceptance tests) instill more
>>> confidence
>> That and, as Kent Beck describes today, responsible software, are why
>> we do testing at all.
>>> in me about the correctness of the system's behavior. Object level
>>> examples (unit tests) instill more confidence in me about the
>>> design of the code.
>>> With acceptance tests passing we have no guarantee about the state
>>> of the design. Remember, TDD/BDD naturally produces easy-to-test
>>> objects, and by skipping object level examples you run the risk of
>>> creating dependency-laden, highly coupled objects that are hard to
>>> test. (Just think: you can make all of your features, for a web
>>> app, pass by writing the app in PHP4 with no objects at all :p .)
>> Which is not an inherently bad deal, if that's your comfort zone, and
>> if that's the comfort zone of *everybody* on your team.
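Ben's point about TDD naturally producing easy-to-test objects can be sketched in a few lines of plain Ruby. This is an illustration only: the `Order` and `FlatTax` names are invented for this example, not from the thread. Because the tax policy is injected rather than hard-coded, the object can be exercised at the object level with no framework and no external service:

```ruby
# A hypothetical order-pricing object. The tax policy is injected,
# so an object-level example can exercise Order in isolation.
class Order
  def initialize(items, tax_policy)
    @items = items          # array of { price_cents:, quantity: } hashes
    @tax_policy = tax_policy
  end

  def total_cents
    subtotal = @items.sum { |i| i[:price_cents] * i[:quantity] }
    subtotal + @tax_policy.tax_for(subtotal)
  end
end

# A trivial stand-in policy: 10% tax, integer cents throughout.
class FlatTax
  def tax_for(subtotal_cents)
    (subtotal_cents * 10) / 100
  end
end

order = Order.new([{ price_cents: 500, quantity: 2 }], FlatTax.new)
puts order.total_cents  # 1000 subtotal + 100 tax => 1100
```

A coupled version that instantiated a concrete tax service (or hit a database) inside `total_cents` would force every example up to the acceptance level, which is exactly the debt being described.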
>>> I also think that acceptance tests are too slow to be used in all
>>> refactorings, and they are not fine grained enough, so you'll end
>>> up doing more debugging than you would otherwise with good object
>>> level coverage. I generally try to keep each individual unit test
>>> faster than a tenth of a second, as suggested in 'Working
>>> Effectively With Legacy Code'. The result is an extremely fast
>>> suite that can be used to quickly do refactorings. I have
>>> experienced the pain of using just Cucumber first hand: finding
>>> bugs on this level is just not as fast as with object level
>>> examples. If you skip object level examples you are incurring a
>>> debt that you will feel down the road, IMO.
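The tenth-of-a-second budget mentioned above is easy to check mechanically. A minimal sketch using Ruby's standard Benchmark module; the `word_count` function is invented here purely as a stand-in for a unit under test:

```ruby
require "benchmark"

# An invented pure function standing in for a unit under test.
def word_count(text)
  text.split(/\s+/).reject(&:empty?).length
end

# Time one example. A pure, in-memory check like this runs in
# microseconds, comfortably inside the 0.1s-per-example budget
# suggested in 'Working Effectively With Legacy Code'.
elapsed = Benchmark.realtime do
  raise "wrong count" unless word_count("to be or not to be") == 6
end

puts format("%.6fs (budget 0.100000s)", elapsed)
```

The contrast with an acceptance test, which boots the application stack and drives it end to end, is several orders of magnitude, which is why a unit suite stays usable during tight refactoring loops.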
>>> Someone at the start of this thread had wondered what people had
>>> found when they went through this process of balancing FIT tests
>>> with unit tests.
>> I can speak to this a bit. Maybe more than a bit.
>> When I was working with .NET, FitNesse, and NUnit, we had very high
>> levels of coverage in NUnit. Early on in one project I told Micah
>> Martin
>> (who co-created FitNesse with Bob Martin) that I was concerned about
>> the duplication between our FitNesse tests and NUnit tests and
>> questioned the value of keeping it.
>> Micah pointed out reasons that made absolute 100% perfect sense in
>> the context of the project we were working on. The customers were
>> encouraged to own the FitNesse tests. They were stored on a file
>> system, backed up in zip files, while the NUnit tests were stored in
>> subversion with the code. The FitNesse fixtures were stored with the
>> application code, distant from the FitNesse tests.
>> In order to foster confidence in the code amongst the developers,
>> having a high level of coverage in NUnit made sense, in spite of the
>> duplication with some of the FitNesse tests.
>> That duplication, by the way, was only in terms of method calls at
>> the highest levels of the system. When a FitNesse test made an API
>> call, that message went all the way to the database and back.
>> When an NUnit test made the same call, that message typically got no
>> further than the object in the test, using stubs and mocks to keep
>> it isolated.
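The distinction David describes, where the acceptance test's call goes all the way to the database and back while the unit test stops at the object, can be sketched in plain Ruby. The `AccountFinder` and `FakeGateway` names are hypothetical; in the projects discussed these would have been FitNesse fixtures and NUnit tests:

```ruby
# The object under test talks to persistence only through an
# injected gateway, so a unit-level example can hand it an
# in-memory fake and the message never reaches a real database.
class AccountFinder
  def initialize(gateway)
    @gateway = gateway
  end

  def active_account_names
    @gateway.all_accounts
            .select { |a| a[:active] }
            .map { |a| a[:name] }
  end
end

# In-memory stand-in for the real database gateway.
class FakeGateway
  def all_accounts
    [{ name: "alice", active: true }, { name: "bob", active: false }]
  end
end

finder = AccountFinder.new(FakeGateway.new)
puts finder.active_account_names.inspect  # ["alice"]
```

An acceptance-level test would instead construct the real gateway, so the same `active_account_names` call would round-trip through the database; both tests exercise the same API, which is the duplication being weighed here.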
>> Now fast forward to our current discussion about Cucumber and RSpec.
>> As things stand today, we tend to store .feature files right in the
>> app alongside the step_definitions and the application code.
>> The implications here are different from having a completely
>> separate acceptance testing system. I'm not saying that abandoning
>> RSpec or
>> Test::Unit or whatever is the right thing to do. But I certainly feel
>> less concerned about removing granular code examples, especially on
>> rails/merb controllers and views, when I've got excellent coverage of
>> them from Cucumber with Webrat. Thus far I have not seen a case
>> where I couldn't quickly understand a failure in a view or
>> controller based on the feedback I get from Cucumber with Webrat.
>> But this is mostly because that combination of tools does a very good
>> job of pointing me to the right place. This is not always the case
>> with high level examples. If you're considering relaxing a
>> requirement for granular examples, you should really consider each
>> case carefully, and include the level of granularity of feedback
>> you're going to get from your toolset when you make that decision.
>> Now this is how *I* see things.
>> For anybody who is brand new to all this, my feeling is that whatever
>> pain there is from duplication between the two levels of examples and
>> having to change granular examples to refactor is eclipsed by the
>> pain of debugging from high level examples.
>> Also, as I alluded to earlier, every team is different. If you are
>> working solo, the implications of taking risks by working
>> predominantly at higher levels are different from when you are on a
>> team. The point of testing is not to follow a specific process. The
>> point is to instill confidence so you can continue to work without
>> migraines, and deliver quality software.
> Thanks for sharing your experience and insight! Having never used
> FitNesse I didn't see that distinction at all. What you said makes a
> lot of sense.
Amen to that. Thanks guys, it's been a fascinating and enlightening
thread.
I am looking forward to the next chance I get to talk about this with
someone (who's interested!) over a beer.
I don't suppose any of you are going to XP Day, London, this year?