
Re: duplication of SRFIs

This page is part of the web mail archives of SRFI 78 from before July 7th, 2015. The new archives for SRFI 78 contain all messages, not just those from before July 7th, 2015.

Sebastian Egner wrote:

Per Bothner wrote:
> > One of the reasons for SRFI-64 is that I think the Scheme world
> > needs a standard for test-suites: specifically, there should be an
> > expectation that SRFIs should come with a portable test suite.

> SRFIs come with whatever the authors choose to include, within the limits
> of the SRFI process, and nothing more. Having any other expectation is
> self-deception or wishful thinking.

OK - SRFI authors should be "encouraged" to include a test suite.
Having a standard test-suite format makes that both easier and
more useful.

> If the authors of a SRFI choose to include some testing code for their
> reference implementation that is great, but should by no means be
> 'expected' by you or anybody else.

Any non-toy application or library needs systematic testing.  The
best way to do that is with an automatically executable test suite.
The SRFI process is quite loose and informal, but we should at
least *encourage* test suites.

> It should be even less expected
> that the authors of a SRFI choose SRFI 64 (or SRFI 78 or any other SRFI)
> as their framework for testing, just because you put it forward as the
> "standard for test-suites."

Well, a "standard for test-suites" would be a good thing for Scheme.
Ideally there would be consensus on a standard.  It needs to be simple
to write test suites, it should satisfy the most common API
(non-interactive non-graphical) testing needs, and at least the basic
functionality must be trivially portable to most/all Scheme
implementations.  Otherwise I'm not too wedded to anything specific,
but I think SRFI-64 satisfies those needs, and I've tried to
listen to other Scheme implementors.

SRFI-64 does provide more "hooks"/features than minimally needed.
If that is a concern, I'm open to splitting it up into a core API
for writing basic testsuites, and an extended API for pluggable
test-runners, control over which tests get run etc.  There is an
issue though about conditionalizing tests and noting that tests
are expected to fail - are those basic or extended features?

> When SRFI 64 came out, I took it as a first step towards an 'industrial
> strength' testing environment for Scheme. This is not the intention with
> SRFI 78, never was, and never will be. It did not even occur to me that
> you intend to put SRFI 64 forward as 'the one and only' testing mechanism.

Not the "the one and only" testing mechanism, obviously.  But it would
be good to have a standard way to write *test suites*.  Not the "one
and only" of course, but the default which one could expect to
be available.

Note I'm less concerned about standardizing "mechanism" than I am about
standardizing a way to write test suites.

> Many people are consistently lazy. Tests get written when
> it is either not a burden, or when it is undeniably necessary.

Well, yes.  But quality software development does seem to require
test suites - at the very least regression tests.  I.e. I'm not
expecting exhaustive tests that check for all features of an API.
But when you implement a feature, you should write a minimal test
suite, even if it contains just a simple "hello world"-type test.
It is then easier to add to the test suite as you develop or
maintain it.

Having a standard API makes this easier.

> My main problem with SRFI 64 is that I do not want to learn about all the
> nice little things you define like XPASS etc.

Well, perhaps we could leave that out of the core testing API, and
keep it in the extended API.  Or we need a simple tutorial.

> The only thing that I need from a testing
> framework is:
> CHECK and CHECK-EC as specified in SRFI 78

(check <expr> => <expected>)
seems to be the same as:
(test-equal <expr> <expected>)

(check <expr> (=> <equal?>) <expected>)
is the same as using test-eq or test-eqv when <equal?> is eq? or
eqv?, respectively, or more generally:
(test-assert (<equal?> <expr> <expected>))

I'm not opposed to the => syntax; my main concern is getting consensus,
and I think some people will prefer a more traditional function syntax.

I also don't feel strongly about "test-" vs "check-" as long as there
is agreement on a single prefix for the testing API.
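To make the correspondence concrete, here is a sketch, assuming a Scheme in which both SRFI 78 and SRFI 64 are loaded (the test-equal calls follow SRFI 64's argument order, with the expected value first):

```scheme
;; SRFI 78 style:
(check (+ 2 2) => 4)
(check (string-append "a" "b") (=> string=?) "ab")

;; Roughly equivalent SRFI 64 style (expected value first):
(test-equal 4 (+ 2 2))
(test-assert (string=? (string-append "a" "b") "ab"))
```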

The problem with check-ec is that it pre-supposes another SRFI which
is not that widely implemented.  That isn't in itself a problem, if
there is a portable implementation.  I'm more concerned about design:
you're complaining about having to understand XPASS.  I'm more concerned
about people having to learn and understand SRFI-42-style comprehensions
in order to read, write, and update test suites.  This seems a bit
premature, until SRFI-42-style comprehensions become more established.
(I also note that Olin Shivers is working on a more general loop macro
- see ICFP '05.)

I'm not sure:
(check-ec <qualifier>^* <expr> => <expected> (<argument>^*))
is enough of an improvement over:
(do-ec <qualifier>^*
  (test-equal (format "~s" <argument>) <expr> <expected>))
The main advantage (besides being slightly shorter) is that the
check-ec stops after the first failed check.  That's useful but a
global "exit after first failure" option may be even more useful.
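For comparison, here are the two styles on a concrete case (a sketch, assuming SRFI 42, SRFI 64, and SRFI 78 are all available; the check-ec form stops at the first failing n, while the do-ec form runs every iteration):

```scheme
;; SRFI 78: check that n*n equals n^2 for n = 0..4,
;; stopping after the first failure and reporting the offending n.
(check-ec (: n 5) (* n n) => (expt n 2) (n))

;; SRFI 42 + SRFI 64: the same checks, one named test per iteration.
(do-ec (: n 5)
  (test-equal (number->string n) (expt n 2) (* n n)))
```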

> and a way to print the report
> 'here and now' without any setup or any concepts like test runners etc.

SRFI-64 does that.  Just a test-begin and test-end - look at the initial
example.  That's a complete executable test suite.
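For instance, a complete runnable suite in that style might look like this (a sketch, assuming an implementation that provides SRFI 64):

```scheme
(import (srfi 64))   ; R7RS form; e.g. (use-modules (srfi srfi-64)) in Guile

(test-begin "arithmetic")
(test-equal 4 (+ 2 2))            ; expected value, then expression
(test-assert (positive? (* 3 3)))
(test-end "arithmetic")           ; reports pass/fail counts
```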

> Controlling the verbosity of output, switching off the tests, and dealing
> with test runners explicitly, groups of tests etc. is all more advanced
> stuff that I want to be able to bring in when it becomes unavoidable.

Well, you do have some non-basic controls for that: check-set-mode!,
check-passed?, etc.  Where you draw the line between simple and
advanced is not obvious.  Those functions seem to be more for
controlling test execution (i.e. the test runner) rather than writing
test suites.
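To illustrate what those controls look like on the SRFI 78 side (a sketch; modes and the count argument as specified there):

```scheme
(check-set-mode! 'report-failed)  ; print only the checks that fail
(check (+ 1 1) => 2)
(check (* 2 3) => 6)
(check-report)                    ; summarize 'here and now'
(check-passed? 2)                 ; #t iff all checks passed and 2 were run
```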

> One other thing I
> noticed is that it is useful to print test results immediately when they
> become available one by one; this gives you a way of tracking things up
> to a crash.

The default "simple runner" in SRFI-64 does that.

> Do you see any way of incorporating this stuff into SRFI 64? If so, how
> would you go about it? If not, what would be the problem?

The main issues I see are:
(1) names and syntax: i.e. test vs check and the => syntax.
Here I feel: whichever we can get most Scheme and library
implementors to go along with.

(2) Whether to split SRFI-64 into a core and an extended API.
An advantage of a split is that people who just want to
write/maintain test suites can read a smaller document rather
than be intimidated by a more complex one.  But that should
perhaps be dealt with in a separate "how to write simple test cases"
tutorial document.

(3) begin/end-wrappers.  Your check-report is handled by my test-end,
except that test-end supports nesting.  (That is probably not a core
feature.)  Having a test-begin is useful and not burdensome; it is
similar to check-reset!, but always required at the beginning.
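A sketch of the nesting: each test-begin opens a group that the matching test-end closes and reports on.

```scheme
(test-begin "all")          ; outer group
(test-begin "strings")
(test-equal "ab" (string-append "a" "b"))
(test-end "strings")        ; closes and reports the "strings" group
(test-begin "numbers")
(test-equal 4 (+ 2 2))
(test-end "numbers")
(test-end "all")            ; closes the whole run
```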

(4) check-ec (or test-ec).  I'm not sure that belongs in SRFI-64 (or
a "core" framework in general).  However, it makes perfect sense to
have that as a separate SRFI, as long as it is stylistically compatible
(i.e. consistency about syntax), and can be implemented using SRFI-42
plus the core API.

(5) check-set-mode! is basically a function for tweaking the
test-runner.  It should be easy enough to add the desired functionality.
My implementation by default writes a detailed log to a file, and a
quick summary with failing tests to the standard output port.  It might
be reasonable to only write the first failure to standard output.
	--Per Bothner
per@xxxxxxxxxxx   http://per.bothner.com/