[rspec-users] Mocks? Really?

Zach Dennis zach.dennis at gmail.com
Sun Dec 30 01:42:03 EST 2007


On Dec 29, 2007 5:46 PM, Francis Hwang <sera at fhwang.net> wrote:
> I don't know if anyone else will find this thought useful, but:
>
>
> I think different programmers have different situations, and they
> often force different sorts of priorities. I feel like a lot of the
> talk about mocking -- particularly as it hedges into discussions of
> modeling, design as part of the spec-writing process, LoD, etc --
> implicitly assumes you want to spend a certain percentage of your
> work-week delineating a sensible class design for your application,
> and embedding those design ideas into your specs.

The fact is that you are going to spend time designing, testing and
implementing anyway. That is a natural part of software development;
you cannot develop software without doing these things. The challenge
is to do them in a way that better supports the initial development of
a project as well as maintenance and continued development.

>
> At the risk of
> sounding like a cowboy coder I'd like to suggest that some situations
> actually call for more tolerance of chaos than others.
>
>
> I can think of a few forces that might imply this:
>
> - Team size. A bigger team means the code's design has to be more
> explicit, because of the limits of implicit knowledge team members
> can get from one another through everyday conversation, etc.

This argument doesn't pan out. First, it's highly unlikely that the
same developers will be on a project for its full lifetime. Second, it
fails to account for the negative impact of bad code and design: the
time it takes to understand the bad design, to find and fix obscure
bugs, and to extend the system with new features or change existing
ones.


> - How quickly the business needs change. Designs for medical imaging
> software are likely to change less quickly than those of a consumer-
> facing website, which means you might have more or less time to tease
> out the forces that would lead you to an optimal design.

This doesn't pan out either. Business needs also change at infrequent
intervals. Company mergers, new or updated policies, new or updated
laws, a new CEO wanting something, etc. don't happen every day, but
when they do they can have a big impact. The goal of good program
design isn't to add unnecessary complexity to account for these.

The goal of good program design is to develop a system that is simple,
coherent and able to change to support the initial development of a
project as well as maintenance and continued development.

The ability to "change" is relative -- every program design can be
changed. There are certain practices and disciplines that allow for
easier change, though -- change that reinforces the goal of good
program design. The Law of Demeter is one of them. Simple objects with
a single responsibility are another, reinforcing the separation of
concerns. Testing is another.
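To make the Law of Demeter point concrete, here is a minimal sketch in
plain Ruby. The class and method names (Order, Customer, Address,
shipping_zip) are invented for illustration, not taken from any code
discussed in this thread:

```ruby
# Law of Demeter sketch: an object should talk to its immediate
# collaborators, not reach through them into their internals.

class Address
  attr_reader :zip

  def initialize(zip)
    @zip = zip
  end
end

class Customer
  def initialize(address)
    @address = address
  end

  # Expose only what callers need, not the whole Address object.
  def zip_code
    @address.zip
  end
end

class Order
  def initialize(customer)
    @customer = customer
  end

  # Demeter-friendly: one message to an immediate collaborator...
  def shipping_zip
    @customer.zip_code
  end

  # ...instead of the "train wreck" @customer.address.zip, which
  # couples Order to the internal structure of Customer.
end

order = Order.new(Customer.new(Address.new("49503")))
puts order.shipping_zip # prints 49503
```

The payoff for change is that if Customer later stores its address
differently, only Customer's zip_code method moves; Order is untouched.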

The concept of an "optimal" design implies there is one magical design
that will solve all potential issues. This puts people in the "design,
then build" mindset -- the idea that if the design is perfect then all
you have to do is build it. We know this is not correct.


> In my case: I work in an small team (4 Rails programmers) making
> consumer-facing websites, so the team is small and the business needs
> can turn on a dime. From having been in such an environment for
> years, I feel like I've learned to write code that is just chaotic
> enough and yet still works. When I say "just chaotic enough", I mean
> not prematurely modeling problems I don't have the time to fully
> understand, but still giving the code enough structure and tests that
> 1) stupid bugs don't happen and 2) I can easily restructure the code
> when the time seems right.

The challenge is to write code that is not chaotic, and to learn to do
it in a way that makes the code more meaningful and enhances your
ability to develop software rather than hindering it.



> In such environment, mocking simply gets in my way. If I'm writing,
> say, a complex sequence of steps involving the posting of a form,
> various validations, an email getting sent, a link getting clicked,
> and changes being made in the database, I really don't want to also
> have to write a series of mocks delineating every underlying call
> those various controllers are making. At the time I'm writing the
> spec, I simply don't understand the problem well enough to write good
> lines about what should be mocked where. In a matter of hours or days
> I'll probably end up rewriting all of that stuff, and I'd rather not
> have it in my way. We talk about production code having a maintenance
> cost: Spec code has a maintenance cost as well. If I can get the same
> level of logical testing with specs and half the code, by leaving out
> mocking definitions, then that's what I'm going to do.

I think we should make a distinction. When you need to write code and
explore in order to understand what is needed to solve a problem, I
call that a "spike".

I don't test spikes. They are an exploratory task which helps me
understand what I need to do. Once I understand what I need to do, I
test-drive my development. Different rules then apply for when you use
mocks. In previous posts in this thread I pointed out that I tend to
use a branch/leaf node object guideline to determine where I use mocks
and where I don't.
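A hypothetical sketch of that guideline, in plain Ruby rather than
rspec-mocks syntax so it stands alone (OrderProcessor, Mailer and
LineItem are invented names): a branch object coordinates
collaborators, so the interaction with its Mailer is worth verifying
with a mock, while a leaf object like LineItem just holds data and is
used directly:

```ruby
# Leaf node: a simple value-like object. No reason to mock it;
# constructing the real thing is cheap and has no side effects.
LineItem = Struct.new(:description, :price)

# Branch node: coordinates collaborators. Its interesting behavior
# is *which messages it sends*, which is what mocks capture.
class OrderProcessor
  def initialize(mailer)
    @mailer = mailer
  end

  def process(items)
    total = items.sum(&:price)
    @mailer.send_receipt(total) # the interaction we want to verify
    total
  end
end

# A minimal hand-rolled mock standing in for an rspec-mocks double:
# it records the messages it receives so the test can assert on them.
class MockMailer
  attr_reader :received_totals

  def initialize
    @received_totals = []
  end

  def send_receipt(total)
    @received_totals << total
  end
end

mailer = MockMailer.new
items  = [LineItem.new("book", 10), LineItem.new("pen", 2)] # real leaves
total  = OrderProcessor.new(mailer).process(items)

puts total                          # prints 12
puts mailer.received_totals.inspect # prints [12]
```

The point of the guideline is that the leaves are passed in as real
objects while only the branch's outgoing interaction is mocked, so the
spec verifies behavior without restating every underlying call.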


> As an analogy: I live in New York, and I've learned to have semi-
> compulsive cleaning habits from living in such small places. When you
> have a tiny room, you notice clutter much more. Then, a few years
> ago, I moved to a much bigger apartment (though "much bigger" is
> relative to NYC, of course). At first, I was cleaning just as much,
> but then I realized that I simply didn't need to. Now sometimes I
> just leave clutter around, on my bedside table or my kitchen counter.
> I don't need to spend all my life neatening up. And if I do lose
> something, I may not find it instantly, but I can spend a little
> while and look for it. It's got to be somewhere in my apartment, and
> the whole thing's not even that big.

Two things about this bother me. One, it implies that from the get-go
it is ok to leave crap around an application's code base. Two, it
builds on the concept of an "optimal" design, by equating good design
with spending your life neatening up.

I am going to rewrite your analogy in a way that changes the meaning
as I read it, but hopefully conveys what you wanted to get across:
"
I do not want to spend the life of a project refactoring a code base
to perfection for the sake of ideological views on what code should
be. I want to develop a running program for my customer, and where I
find the ideals clashing with that goal I will abandon the ideals.
Knowing this, parts of my application may be cluttered or imperfect,
but I am ok with this and so is my customer -- he/she has a running
application.
"

If this is what you meant then I agree with you. The question is: are
there things you can learn or discover which better support the goal
of developing software for your customer -- for the initial launch as
well as maintenance and ongoing development? If so, which ones can be
learned, and how do they apply? And when you discover them, be sure to
share with the rest of us. =)

Finally, IMO mocking and interaction-based testing have a place in
software development, and when used properly they add value to the
software development process.

--
Zach Dennis
http://www.continuousthinking.com
