[rspec-users] validate_presence_of

Stephen Eley sfeley at gmail.com
Thu Feb 19 12:15:35 EST 2009


On Thu, Feb 19, 2009 at 10:55 AM, David Chelimsky <dchelimsky at gmail.com> wrote:
>
> This is where this all gets tricky.

Yep.  >8->


> TDD (remember? that's where this all started) says you don't write any
> subject code without a failing *unit test*. This is not about the end
> result - it's about a process. What you're talking about here is the
> end result: post-code testing.

Yes.  And I didn't.  The test "it 'requires a login'" fails until I
write a validation for the login field.  I don't write the validation
until I have that test.  Once that test is written, any way of
validating login's presence -- with validates_presence_of in AR, or a
:nullable => false on the property in DataMapper, or a callback before
saving, or whatever -- will pass the test.  I have written the code to
pass the test, and I have followed TDD principles.  I can now move on
to the next problem.
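
Concretely, I mean something like this (the model and attribute names
are just for illustration, and the syntax is from memory, so treat it
as a sketch):

  # spec/models/user_spec.rb -- a rough sketch; "User" and "login"
  # are illustrative names, not anything from this thread
  describe User do
    it "requires a login" do
      User.new(:login => nil).should_not be_valid
    end
  end

  # app/models/user.rb -- the spec doesn't care how presence is enforced:
  class User < ActiveRecord::Base
    validates_presence_of :login
    # ...or, in DataMapper, roughly:
    #   property :login, String, :nullable => false
    # ...or a validate callback that adds an error when login is blank.
  end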

But I haven't yet written any code setting the message, because I
haven't written any tests for the message.  At this point I don't care
what the message is, just that I have the right data.  I care about
the message when I start focusing on presentation.  When I write specs
for the exchange with the user, I will write a test.  I might reopen
the model's spec and add it there (maintaining 'unit test' purity), or
I might put it in the request spec, but either way a test will break
before the code is written.
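
When that day comes, the failing test I'd add looks something like
this (same illustrative model as above; the expected text is just
Rails' stock presence message, so adjust to taste):

  it "explains that login is required" do
    user = User.new(:login => nil)
    user.valid?
    user.errors.on(:login).should == "can't be blank"
  end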

I think that keeps the *spirit* of TDD, whether or not it follows the
letter of its rules.  And yes, I know it all comes down to "it depends."
On a larger project with a lot of people on it, I'd probably insist
on more formalism for the sake of keeping things organized.  But if
it's a small app with a focus on shipping fast and frequently, one
test that fails for each broken behavior is enough.


> If you're true to the process, then you'd have material in both
> places. The cost of this is something that looks like duplication, but
> it's not really, because at the high level we're specifying the
> behaviour of the system, and at the low level we're specifying the
> behaviour of a single object - fulfilling its role in that system.

And again: the extent to which I'd do that is the extent to which I
care how the system is organized.  Sometimes it really does matter.
More often, to me, it doesn't.  If an integration spec breaks, there's
*usually* no mystery to me because I can just look at the backtrace to
see what broke and fix it in a few seconds.  Writing low-level specs
to isolate a failure that's already obvious, and quickly fixed without
them, doesn't save time.  Sometimes it is more complicated and
confusing, and if it takes me too long to understand why the high
level is broken, I'll write more unit specs to figure it out.
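
When I do drop down, the "material in both places" David describes
looks roughly like this (the controller and action names are mine,
and the syntax is from memory):

  # High level: the exchange with the user
  describe UsersController, "POST create with a blank login" do
    it "re-renders the signup form instead of creating a user" do
      lambda {
        post :create, :user => { :login => "" }
      }.should_not change(User, :count)
      response.should render_template("new")
    end
  end

  # Low level: the model spec above ("it requires a login") --
  # the same rule, restated from the single object's point of view.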

That's not backwards.  A test still broke.  If I always have at least
one test that fails on any incorrect behavior that matters, and I
never ship with failing tests, then my testing has satisfactory
coverage, whether it's an integration test or a unit test or a highly
trained hamster reviewing my log files.  Having more tests and finer
detail only matters if it saves me time.  (Which, sometimes, it does.)

That's just my opinion.  Not the law.

-- 
Have Fun,
   Steve Eley (sfeley at gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org

