[rspec-users] Mocks? Really?

Francis Hwang sera at fhwang.net
Sun Dec 30 15:24:10 EST 2007

On Dec 30, 2007, at 1:42 AM, Zach Dennis wrote:

> On Dec 29, 2007 5:46 PM, Francis Hwang <sera at fhwang.net> wrote:
>> I don't know if anyone else will find this thought useful, but:
>> I think different programmers have different situations, and they
>> often force different sorts of priorities. I feel like a lot of the
>> talk about mocking -- particularly as it edges into discussions of
>> modeling, design as part of the spec-writing process, LoD, etc --
>> implicitly assumes you want to spend a certain percentage of your
>> work-week delineating a sensible class design for your application,
>> and embedding those design ideas into your specs.
> The fact is that you are going to spend time on designing, testing and
> implementing anyways. It is a natural part of software
> development. You cannot develop software without doing these
> things. The challenge is to do it in a way that better supports the
> initial development of a project as well as maintenance and continued
> development.

I certainly didn't mean to imply that you shouldn't do any design or  
testing. If I had to guess at my coding style versus the average  
RSpec user, based on what's been said in this thread, I'd guess that  
I do about as much writing of tests/specs, and probably spend less  
time designing. But there is certainly such a thing as overdesigning,  
as well, right? I'm always trying to find the right amount, and I  
suspect that "the right amount" can vary somewhat in context.

>> At the risk of
>> sounding like a cowboy coder I'd like to suggest that some situations
>> actually call for more tolerance of chaos than others.
>> I can think of a few forces that might imply this:
>> - Team size. A bigger team means the code's design has to be more
>> explicit, because of the limits of implicit knowledge team members
>> can get from one another through everyday conversation, etc.
> This argument doesn't pan out. First, it's highly unlikely that
> the same developers are on a project for the full lifetime of the
> project. Second, this fails to account for the negative impact of bad
> code and design. The negative impact includes the time it takes to
> understand the bad design, find/fix obscure bugs and to extend with
> new features or making changes to existing ones.

Again, I did not say "if you have a small team you don't have to do  
any design at all." I said that perhaps if you have a much smaller  
team you can spend a little less time on design, because implicit  
knowledge is much more effectively communicated.

Are you disagreeing with this point? Are you saying that two software  
projects, one with four developers and the other with forty, will  
ideally spend the exact same percentage of time thinking about  
modeling, designing, etc.?

>> - How quickly the business needs change. Designs for medical imaging
>> software are likely to change less quickly than those of a consumer-
>> facing website, which means you might have more or less time to tease
>> out the forces that would lead you to an optimal design.
> This doesn't pan out either. Business needs also change at infrequent
> intervals. Company mergers, new or updated policies, new or updated
> laws, the new CEO wanting something, etc are things that don't happen
> every day, but when they do happen it can have a big impact. The goal
> of good program design isn't to add unnecessary complexity which
> accounts for these.

I wasn't saying that some business needs never change. The point I  
was trying to make is that in some sorts of businesses and companies,  
change happens more often, and can be expected to happen more often  
based on past experience.

>> In my case: I work in a small team (4 Rails programmers) making
>> consumer-facing websites, so the team is small and the business needs
>> can turn on a dime. From having been in such an environment for
>> years, I feel like I've learned to write code that is just chaotic
>> enough and yet still works. When I say "just chaotic enough", I mean
>> not prematurely modeling problems I don't have the time to fully
>> understand, but still giving the code enough structure and tests that
>> 1) stupid bugs don't happen and 2) I can easily restructure the code
>> when the time seems right.
> The challenge is to write code that is not chaotic, and to learn to do
> it in a way that allows the code to be more meaningful and that
> enhances your ability to develop software rather than hinders it.

I wonder if part of the disconnect here depends on terminology. Some  
might see "chaos" as a negative term; I don't. There are plenty of  
highly chaotic, functional systems, both man-made and natural.  
Ecosystems, for example, are chaotic: They have an order that is  
implicit through the collective actions of all their agents. But that  
order is difficult to understand, since it's not really written down.

I guess that's what I'm trying to express when applying the word  
"chaos" to code: It functions for now, but perhaps the way it works  
isn't as expressive as it could be for a newcomer to the code.

Another thing I'd add is that I find a codebase to be asymmetrical,  
in terms of how much specification each individual piece needs. I  
find it surprising, for example, when people want to test their Rails  
views in isolation. I write plenty of tests when I'm working, but I  
try to have a sense of which pieces of code require a more full  
treatment. I'll extensively test code when the cost/benefit ratio  
makes sense to me, trying to think about factors such as:

- how hard is it to write the test?
- how hard is the code, and how many varied edge cases are there that  
I should write down?
- are there unusual cases that I can think of now, that should be  
embodied in a test?

>> In such environment, mocking simply gets in my way. If I'm writing,
>> say, a complex sequence of steps involving the posting of a form,
>> various validations, an email getting sent, a link getting clicked,
>> and changes being made in the database, I really don't want to also
>> have to write a series of mocks delineating every underlying call
>> those various controllers are making. At the time I'm writing the
>> spec, I simply don't understand the problem well enough to write good
>> lines about what should be mocked where. In a matter of hours or days
>> I'll probably end up rewriting all of that stuff, and I'd rather not
>> have it in my way. We talk about production code having a maintenance
>> cost: Spec code has a maintenance cost as well. If I can get the same
>> level of logical testing with specs and half the code, by leaving out
>> mocking definitions, then that's what I'm going to do.
> I think we should make a distinction. In my head when you need to  
> write code
> and explore so you can understand what is needed in order to solve  
> a problem I
> call that a "spike".
> I don't test spikes. They are an exploratory task which help me
> understand what I need to do. When I understand what I need to do I
> test drive my development. Now different rules apply for when you use
> mocks. In previous posts in this thread I pointed out that I tend to
> use a branch/leaf node object guideline to determine where I use mocks
> and when I don't.

My understanding of a spike is to write code that explores a problem  
that you aren't certain is solvable at all, given a certain set of  
constraints. That's not the lack of understanding I'm talking about:  
I'm more addressing code that I know is easily writeable, but there  
are a number of issues regarding application design that I haven't  
worked out yet. I'd rather write a test that encapsulates only the  
external touchpoints -- submit a form, receive an email, click on the  
link in the email -- and leave any deeper design decisions to a few  
minutes later, when I actually begin implementing that interaction.
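To make that concrete, here's a minimal sketch, in plain Ruby rather  
than Rails/RSpec so it runs standalone, of a test that pins down only  
the external touchpoints of a signup flow -- form submitted, email  
sent, link clicked -- without committing to any internal design. The  
SignupFlow and FakeMailer names are hypothetical stand-ins, not real  
Rails API:

```ruby
require 'securerandom'

# In-memory stand-in for outgoing mail; the test only cares that a
# confirmation message went out and what link it contained.
class FakeMailer
  attr_reader :deliveries

  def initialize
    @deliveries = []
  end

  def deliver(to, body)
    @deliveries << { to: to, body: body }
  end
end

# Hypothetical signup flow: submitting the form creates a pending
# account and mails a confirmation link; visiting the link confirms it.
# The internals are free to be restructured later -- the test below
# never mentions them.
class SignupFlow
  def initialize(mailer)
    @mailer = mailer
    @accounts = {} # token => { email:, confirmed: }
  end

  def submit_form(email)
    token = SecureRandom.hex(8)
    @accounts[token] = { email: email, confirmed: false }
    @mailer.deliver(email, "Confirm at /confirm/#{token}")
  end

  def visit(link)
    token = link[%r{/confirm/(\h+)}, 1]
    @accounts[token][:confirmed] = true if @accounts.key?(token)
  end

  def confirmed?(email)
    @accounts.any? { |_, a| a[:email] == email && a[:confirmed] }
  end
end

# The "spec": only external touchpoints, no mocked internal calls.
mailer = FakeMailer.new
app = SignupFlow.new(mailer)
app.submit_form('sera@fhwang.net')
link = mailer.deliveries.last[:body][%r{/confirm/\h+}]
app.visit(link)
app.confirmed?('sera@fhwang.net') # => true
```

Because the assertions touch only the form, the email, and the link,  
rewriting the internals hours later doesn't invalidate the test.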

There's another kind of "not understanding" that's also relevant  
here: A "not understanding" due to the fact that you don't have all  
the relevant information, and you can't get it all now. For example:  
You release the very first iteration of a website feature on Monday,  
knowing full well that the feature's not completed. But the reason  
you release it is because on Wednesday you want to collect user data  
regarding this feature, which will help you and the company make  
business decisions about where the feature should go next.

>> As an analogy: I live in New York, and I've learned to have semi-
>> compulsive cleaning habits from living in such small places. When you
>> have a tiny room, you notice clutter much more. Then, a few years
>> ago, I moved to a much bigger apartment (though "much bigger" is
>> relative to NYC, of course). At first, I was cleaning just as much,
>> but then I realized that I simply didn't need to. Now sometimes I
>> just leave clutter around, on my bedside table or my kitchen counter.
>> I don't need to spend all my life neatening up. And if I do lose
>> something, I may not find it instantly, but I can spend a little
>> while and look for it. It's got to be somewhere in my apartment, and
>> the whole thing's not even that big.
> Two things about this bother me. One, this implies that from the
> get-go it is ok to leave crap around an application code base.

Well, not to belabor the analogy, but: It's not "crap". If it's in my  
apartment, I own it for a reason. I may not use it all the time, it  
may not be the most important thing in my life, but apparently I need  
it once in a while or else I'd throw it away. I may not spend all my  
time trying to find the optimal place to put it, but that doesn't  
mean I don't value it. I just might value it less than other things  
in my apartment.

> Two,
> this builds on the concept of an "optimal" design; by way of
> spending your life neatening up.
> I am going to rewrite your analogy in a way that changes the meaning
> as I read it, but hopefully conveys what you wanted to get across:
> "
> I do not want to spend the life of a project refactoring a code base
> to perfection for the sake of ideological views on what code should
> be. I want to develop a running
> program for my customer. And where I find the ideals clashing with
> that goal I will abandon the ideals. Knowing this, parts of my
> application may be clutter or imperfect, but I am ok with this and so
> is my customer -- he/she has a running application.
> "

That's probably close to what I'm trying to say. But in a broader,  
philosophical sense, I'm okay with the fact that my code is never  
going to be perfect. Not at this job, not at any other job. In fact I  
don't know if I've ever met anybody who gets to write perfect code.  
We write code in the real world, and the real world's far from  
perfect. I suppose Wabi Sabi comes into play here.

To bring it back to mocks: It seems to me that mocks might play a  
role in your specs if you were highly focused on the design and  
interaction of classes in isolation from all other classes, but  
understanding that isolation involves having done a decent amount of  
design work -- though more in some cases than in others. But if you  
were living with code that was more chaotic/amorphous/what-have-you,  
prematurely embedding such design assumptions into your specs might  
do more harm than good.

I do, incidentally, use mocks extensively in a lot of code, but only  
in highly focused cases where simulating state of an external  
resource (filesystem, external login service) seems extremely  
important. Of course, that usage of mocks is very different from  
what's recommended as the default w/ RSpec.
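That narrower usage might look something like this sketch, written  
with a hand-rolled fake in plain Ruby so it runs standalone; in an  
actual spec you'd use an RSpec mock object instead. Authenticator and  
FakeLoginService are hypothetical names, not code from a real project:

```ruby
# Object under test: its behavior depends on the state of an external
# login service, which is exactly the kind of thing worth simulating.
class Authenticator
  def initialize(login_service)
    @login_service = login_service
  end

  # Returns :unavailable, :denied, or :ok depending on what the
  # external service reports.
  def authenticate(user, password)
    return :unavailable unless @login_service.reachable?
    @login_service.valid_credentials?(user, password) ? :ok : :denied
  end
end

# Hand-rolled fake that lets each test case put the "external
# resource" into whatever state that case needs -- down, rejecting,
# or accepting -- without touching the network.
class FakeLoginService
  def initialize(reachable:, accept:)
    @reachable = reachable
    @accept = accept
  end

  def reachable?
    @reachable
  end

  def valid_credentials?(_user, _password)
    @accept
  end
end

# Simulating the service being down:
auth = Authenticator.new(FakeLoginService.new(reachable: false, accept: true))
auth.authenticate('fh', 'secret') # => :unavailable
```

The point is that the fake stands in for one external resource only;  
nothing else in the object graph gets mocked.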

Francis Hwang
