[Rubytests-devel] Known problems ...

Johan Holmberg holmberg at iar.se
Wed Dec 1 13:56:47 EST 2004


Thanks for the well-written reply. I can agree with most of what you 
said. Some further thoughts:

In C-Ruby I found a function "rb_notimplement". I assume that it is 
intended for cases like the "ino" method we talked about. On Windows 
we already have:

     $ ruby -e 'File.lchmod("foo")'
     -e:1:in `lchmod': The lchmod() function is unimplemented on \
                       this machine (NotImplementedError)
             from -e:1
     $ ruby -e 'Dir.chroot("foo")'
     -e:1:in `chroot': The chroot() function is unimplemented on \
                       this machine (NotImplementedError)
             from -e:1


Throwing the same exception for the File::Stat#ino method would 
seem natural to me (I'll try to ask about that on "ruby-core"). 
Wouldn't this be the right thing for the methods you mentioned as 
missing in JRuby too?
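
For what it's worth, here is a pure-Ruby sketch of that behaviour, 
modelled on the lchmod/chroot messages above (an illustration of 
the idea, not the actual rb_notimplement implementation):

     # Sketch: a platform-specific method signals that it is not
     # available, instead of silently returning a dummy value.
     class File::Stat
       def ino
         raise NotImplementedError,
               "The ino() function is unimplemented on this machine"
       end
     end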

You wrote:

> As far as testing, I think we should write tests according to how a
> function is *supposed* to work (expected to work, documented to work,
> common sense, etc), rather than testing that *how* it works is
> correct. Test-driven development, and etc...write your tests [...]

I _would like_ to fully agree, but I can see some minor "problems":

The Rubicon tests can be used in at least two different ways: A) to 
verify an implementation against a specification (how it is supposed 
to work), and B) to track changes to the language.

I guess that you are most interested in A). For me, B) has been in 
the foreground. I think it would be nice if Rubicon could 
accommodate both "use cases".

So, when I run Rubicon I would like to have these inputs:

     - a Ruby interpreter to test
     - a language version to consider correct.

A problem here is that there is no formal specification of Ruby. 
The only thing we have is the different versions of C-Ruby, which 
implicitly define the language at different points in time.

As output I would like to have:

     - a list of cases where the interpreter under test doesn't
       follow the specified language version

If this list gets long, there is a risk that one error _shadows_ 
another. Currently most of Rubicon has _one_ test-method for each 
method tested. If a test-method has many "asserts" and we get an 
error, all we can say is that there is _at least_ one case where 
the method doesn't behave as expected. But I would often like to 
know whether there is _one_ error in the method or, let's say, 
_five_. That makes a difference in the B) use case above.

I have been trying to figure out a way to handle both the A) and B) 
use cases, but I don't have any solution yet.

One thought I've had is to use the "known_problem" method for all 
unresolved cases, but make it work in two different ways (see the 
sketch after the list):

1) do *nothing special* by default. The enclosed code should work
    as if "known_problem" wasn't there at all, i.e. a failed "assert"
    will lead to a real failure of the testcase.

2) make "known_problem" sensitive to an environment variable
    (or perhaps a command-line option), and if it is set, reclassify
    the error as a "known problem" (as is done today).
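
A rough sketch of such a two-mode "known_problem", built on top of 
Test::Unit (both the environment variable name and the reporting 
helper are made up for illustration):

     require 'test/unit'

     # Sketch only: run the block normally, but optionally downgrade
     # assertion failures to "known problems".
     def known_problem
       yield
     rescue Test::Unit::AssertionFailedError => e
       if ENV['RUBICON_ACCEPT_KNOWN_PROBLEMS']  # hypothetical variable
         record_known_problem(e)   # hypothetical reporting helper
       else
         raise                     # default: a real test failure
       end
     end

A test would then wrap its shaky assertions like this:

     known_problem do
       assert_equal(12345, File.stat("foo").ino)
     end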

This would avoid missing errors by default (since all tests are 
really executed). But it would still make it possible to "accept" 
that some tests fail, and to go on and find further errors.

Another thought I've had is to split test-methods that have 
problems. Maybe one test-method for each method tested is _too 
big_ a unit. I've felt uncomfortable with splitting test-cases, 
since I've felt the splitting would be governed by accidental 
circumstances. But now that I'm thinking about it again, maybe 
this is the right way to go. It would probably make it easier to 
handle language changes (some _test methods_ would be 
allowed/expected to fail for a particular language version).
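
As an illustration, a split-up test for File::Stat#ino might look 
like this (method names and assertions are invented just to show 
the granularity):

     require 'test/unit'

     class TestFileStatIno < Test::Unit::TestCase
       # one test-method per behaviour, so each can fail on its own
       def test_ino_returns_integer
         assert_kind_of(Integer, File.stat(".").ino)
       end

       def test_ino_is_stable_across_calls
         assert_equal(File.stat(".").ino, File.stat(".").ino)
       end
     end

With that granularity, a language change would show up as one or 
two failing test-methods instead of one opaque failure.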

Am I thinking in the wrong direction?
It's not easy to sort all things out at the same time ....

My approach so far has been to take one step at a time, and to fix 
the easy things first. I have been hoping that some good solution 
to the trickier issues would pop up later ...

/Johan Holmberg


