[rspec-users] Spec'ing via features

David Chelimsky dchelimsky at gmail.com
Tue Nov 25 09:38:54 EST 2008

On Tue, Nov 25, 2008 at 12:52 AM, Ben Mabey <ben at benmabey.com> wrote:
> Andrew Premdas wrote:
>> I came across this idea of dropping unit tests for acceptance tests in
>> the java world. It didn't like it there and I don't like it here, but
>> maybe that's because I'm an old fuddy-duddy or something :). I do think
>> that every public method of an object should be specifically unit
>> tested, and yes that means that if you refactor your object you should
>> refactor your unit tests. This isn't really that much of a burden if
>> you design your objects to have simple and minimal public APIs in the
>> first place.
> +1
>> What is it that makes you think you can refactor code, run acceptance
>> tests, and be safe without unit tests? Writing tests "that guarantee
>> the correct functioning of the system" isn't something you can just
>> do. Best you can hope for with acceptance tests is that part of the
>> system functions correctly most of the time in some circumstances.
>> Perhaps it's the BDD ideal that you're only writing the code you need
>> to make your acceptance tests pass that makes you think your acceptance
>> tests cover all your code. However, just because you've written minimal
>> code to make an acceptance test pass doesn't mean that you can't use
>> this code in a multitude of different ways.
>> Do you really think that BDD created code is just some black box that
>> you can tinker around with, restructure, and still be sure it works just
>> because your black box tests still work?
>> I just don't believe you can get the coverage you need for an
>> application using acceptance testing / features alone. If you do
>> actually write enough features to do this you'll end up doing much
>> more work than writing unit tests combined with features.
> +1 again.
>> All best
>> Andrew
> Here is how I look at the two sets of tests...
> Features at the application level (acceptance tests) instill more confidence


That, and what Kent Beck describes today as responsible software, are
why we do testing at all.

> in me about the correctness of the system's behavior. Object level code
> examples (unit tests) instill more confidence in me about the design of the
> system.
> With acceptance tests passing we have no guarantee about the state of the
> design.  Remember, TDD/BDD naturally produces easy to test objects and by
> skipping object level examples you run the risk of creating dependency-laden,
> highly coupled objects that are hard to test.  (Just think, you can make all
> of your features, for a web app, pass by writing the app in PHP4 with no
> objects at all :p .)

Which is not an inherently bad deal, if that's your comfort zone, and
if that's the comfort zone of *everybody* on your team.

> I also think that acceptance tests are too slow to be used in all
> refactorings, and they are not fine-grained enough, so you'll end up doing
> more debugging than you would otherwise with good object level coverage.  I
> generally try to keep each individual unit test faster than a tenth of a
> second, as suggested in 'Working Effectively With Legacy Code'.  What
> results is an extremely fast suite that can be used to quickly do
> refactorings.  I have experienced the pain of using just Cucumber features
> first hand -- finding bugs on this level is just not as fast as with object
> level examples.  If you skip object level examples you are incurring a technical
> debt that you will feel down the road, IMO.
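To illustrate the kind of sub-tenth-of-a-second example Ben is
describing -- the classes here are invented for the sketch, and I'm
using a hand-rolled fake rather than a mocking library so it runs on
its own:

```ruby
# Hypothetical object under test; its collaborator is injected so the
# example never touches a database or the network.
class OrderTotal
  def initialize(tax_lookup)
    @tax_lookup = tax_lookup
  end

  def total_cents(subtotal_cents)
    subtotal_cents + @tax_lookup.tax_cents_for(subtotal_cents)
  end
end

# Hand-rolled fake: answers instantly, no I/O.
class FakeTaxLookup
  def tax_cents_for(subtotal_cents)
    subtotal_cents / 10 # flat 10% tax, purely for the sake of the example
  end
end

started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
total   = OrderTotal.new(FakeTaxLookup.new).total_cents(10_00)
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started

raise "expected 1100 cents" unless total == 11_00
raise "too slow" unless elapsed < 0.1 # well under the tenth-of-a-second budget
puts "ok in #{elapsed} seconds"
```

A whole suite of examples shaped like this stays fast enough to run
after every small refactoring step.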
> Someone at the start of this thread had wondered what people had learned
> when they went through this process of balancing FIT tests with unit tests.

I can speak to this a bit. Maybe more than a bit.

When I was working with FitNesse and NUnit on .NET, we had very high
levels of coverage in NUnit. Early in one project I told Micah Martin
(who co-created FitNesse with Bob Martin) that I was concerned about
the duplication between our FitNesse tests and NUnit tests and
questioned the value of keeping it.

Micah pointed out reasons that made absolute 100% perfect sense in the
context of the project we were working on. The customers were
encouraged to own the FitNesse tests. They were stored on a file
system, backed up in zip files, while the NUnit tests were stored in
subversion with the code. The FitNesse fixtures were stored with the
application code, distant from the FitNesse tests.

In order to foster confidence in the code amongst the developers,
having a high level of coverage in NUnit made sense, in spite of the
duplication with some of the FitNesse tests.

That duplication, by the way, was only in terms of method calls at the
highest levels of the system. When a FitNesse test made an API call,
that message went all the way to the database and back.

When an NUnit test made the same call, that message typically got no
further than the object in the test, using stubs and mocks to keep the
example isolated and fast.
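To make that concrete -- a hypothetical sketch, not code from that
project -- here is the shape of it in Ruby, with a hand-rolled stub
standing in for the repository so the message stops at the object
under test instead of going all the way to the database:

```ruby
# Hypothetical repository-backed object. In an acceptance test the real
# repository would hit the database; in a unit example we substitute a
# stub, so the message goes no further than the object under test.
class AccountLookup
  def initialize(repository)
    @repository = repository
  end

  def active_names
    @repository.all_accounts.select { |a| a[:active] }.map { |a| a[:name] }
  end
end

# Stub that records the call instead of touching a database.
class StubRepository
  attr_reader :calls

  def initialize(accounts)
    @accounts = accounts
    @calls = 0
  end

  def all_accounts
    @calls += 1
    @accounts
  end
end

repo   = StubRepository.new([{ name: "alice", active: true },
                             { name: "bob",   active: false }])
lookup = AccountLookup.new(repo)

raise "wrong result"  unless lookup.active_names == ["alice"]
raise "stub not used" unless repo.calls == 1
```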

Now fast forward to our current discussion about Cucumber and RSpec.
As things stand today, we tend to store .feature files right in the
app alongside the step_definitions and the application code.
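For instance, a feature file (this one is entirely hypothetical) might
live at features/sign_in.feature, with its step definitions in
features/step_definitions/ right next to the application code:

```gherkin
# features/sign_in.feature -- hypothetical example
Feature: Sign in
  So that I can see my account
  As a registered user
  I want to sign in with my email and password

  Scenario: Successful sign in
    Given a user exists with email "rai@example.com" and password "secret"
    When I sign in as "rai@example.com" with password "secret"
    Then I should see "Welcome back"
```

Because the feature, its steps, and the code all live in one
repository, they change together in the same commits.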

The implications here are different from having a completely decoupled
acceptance testing system. I'm not saying that abandoning RSpec or
Test::Unit or whatever is the right thing to do. But I certainly feel
less concerned about removing granular code examples, especially on
rails/merb controllers and views, when I've got excellent coverage of
them from Cucumber with Webrat. Thus far I have not seen a case where I
couldn't quickly understand a failure in a view or controller based on
the feedback I get from Cucumber with Webrat.

But this is mostly because that combination of tools does a very good
job of pointing me to the right place. This is not always the case
with high level examples. If you're considering relaxing a requirement
for granular examples, you should really consider each case separately
and factor in the granularity of the feedback you're going to get from
your toolset when you make that decision.

Now this is how *I* see things.

For anybody who is brand new to all this, my feeling is that whatever
pain there is from duplication between the two levels of examples and
having to change granular examples to refactor is eclipsed by the pain
of debugging from high level examples.

Also, as I alluded to earlier, every team is different. If you are
working solo, the implications of taking risks by working
predominantly at higher levels are different from when you are on a
team. The point of testing is not to follow a specific process. The
point is to instill confidence so you can continue to work without
migraines, and deliver quality software.


>  While I know some people on this list could provide some first hand
> experience, I think this post by Bob Martin should provide some good
> insight:
> http://blog.objectmentor.com/articles/2007/10/17/tdd-with-acceptance-tests-and-unit-tests
> - Ben Mabey
>> 2008/11/25 Raimond Garcia <lists at ruby-forum.com>:
>>>> Wow, if that's it in a nutshell... :)
>>>> Pat
>>> Thanks Pat, great summary.
>>> I have to admit that I'm as crazy as Yehuda,
>>> and believe that all we need are just acceptance tests,
>>> at different layers of abstraction, for clients and developers.
>>> I also see the benefits of speccing out a single object's behavior, with
>>> the aim of a good design.
>>> However, the drawbacks of doing this outweigh the benefits, in my
>>> opinion.
>>> Testing how every method of an object is going to behave
>>> implies that after refactoring, that spec will no longer be useful,
>>> even though the business and application logic stay the same.
>>> I believe that being able to come up with a good design,
>>> is not only dependent on writing tests before your implementation,
>>> but also on knowing how to write a good implementation.
>>> This can be gained through experience,
>>> reading books, blogs, pair-programming,
>>> using tools to tell you about the complexity of your code,
>>> and a constant process of refactoring as we complete requirements,
>>> and then fully understand what the best design could be.
>>> Therefore in my opinion, by writing tests that guarantee
>>> the correct functioning of the system, we have a robust suite of tests.
>>> Let the refactoring come storming in and change the whole
>>> implementation,
>>> but the tests should not be affected at all,
>>> as I'm not testing my implementation nor design,
>>> only the correct functioning of the system,
>>> and relying on other tools on top of tests to maintain my code
>>> nice, clean and understandable by anyone that comes along.
>>> Kind Regards,
>>> Rai
>>> --
>>> Posted via http://www.ruby-forum.com/.
>>> _______________________________________________
>>> rspec-users mailing list
>>> rspec-users at rubyforge.org
>>> http://rubyforge.org/mailman/listinfo/rspec-users

More information about the rspec-users mailing list