
Re: How about dropping semi-variable-arity?

-al wrote:
> However, if that is provided, are you wedded to the idea that lambda's
> feature of semi-variable-arity argument lists like (a b . c) must also
> be supported?

>
> If you're thinking you want that so that you can use a destructuring
> idiom like:
>
>  (let* ((foo '(1 2 3 4 5))
>         ((values x y . z) (unlist foo)))
>    (list x y (apply + z)))
>  => (1 2 12)
>
> I would like to point out that that requires Z to be bound to a list
> of three new pairs and *not* to the cddr of FOO, so this really isn't
> an appropriate idiom for the typical operation of extracting the first
> few elements of a list and the remainder.


Well observed.
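
To make this concrete: the rest variable behaves like a lambda rest
parameter, which is always bound to a freshly allocated list. A minimal
sketch (same FOO as above):

  (define foo '(1 2 3 4 5))

  (define z (apply (lambda (x y . rest) rest) foo))

  (equal? z (cddr foo))   ; => #t  (same elements)
  (eq?    z (cddr foo))   ; => #f  (freshly consed pairs)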

> As has been suggested, the
> appropriate construct for that would be a pattern-match facility.

Not necessarily. More deconstructors for lists of a certain length would cover most
cases already, e.g. unccons, uncccons, unccccons, ....
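
As a rough sketch (the name and shape are only suggestive, not part of the
SRFI), one of these could be written as:

  (define (unccons pair)      ; first, second, and the rest, as three values
    (values (car pair)
            (cadr pair)
            (cddr pair)))

  (let* (((values x y z) (unccons '(1 2 3 4 5))))
    (list x y (apply + z)))
  ; => (1 2 12), and here Z does share structure with the original list.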

Maybe one can even come up with a naming convention (some sort of Dyck-style
context-free language) encoding more complicated binary trees into the name
of the deconstructor, similar to the way c(a|d)^*r specifies a path from the root of a binary tree.
Pattern matching is of course more flexible, but also much more expensive.

In fact, my idea concerning the "destructor style" of programming is that this SRFI
could be a stepping stone towards patterns. The severe limitation of pattern
matching in most languages is that you can only decompose concrete
data structures. Often you want to decompose an abstract data structure instead:
I don't want to know how the priority queue is represented, I just want to extract
the minimal element. Now if you allow this, and the language has side effects,
then user-defined decomposition operations will shred your clean semantics,
and may also limit the compiler's options for generating good code. This does
not immediately imply, however, that it's a bad idea. Scheme is full of procedures
that behave like functions but are implemented using side-effects.

Now the mechanism in this SRFI with uncons/unlist/unvector is just an inspiration
for writing application-specific deconstructors. I would be glad to see someone
take up the challenge of defining a good pattern matching mechanism that can
apply application-specific deconstructors in a convenient way.
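
Purely as illustration (FIND-MIN and DELETE-MIN stand for whatever a
priority queue library provides; the name is made up), such a deconstructor
might look like:

  (define (unqueue-min pq)
    (values (find-min pq)       ; the minimal element ...
            (delete-min pq)))   ; ... and the queue without it

  ; (let* (((values x pq1) (unqueue-min pq)))
  ;   ...)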

> If you drop support for semi-variable-arity, and you drop support for
> zero-value clauses (which I will discuss more below) then you can dump
> the VALUES keyword entirely, because (values . x) clauses can be more
> simply and clearly written -- without the triple-open-paren -- using
> values->list:

True. But I am not yet convinced that getting rid of the VALUES keyword is worth
dropping semi-variable arity, or zero-valued clauses. Three parens is bad, but
is it that bad?
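
For the record, the two spellings being compared look roughly like this
(PARTITION is SRFI 1's, delivering two values; the example is mine):

  (let* (((values . both) (partition even? '(1 2 3 4))))      ; with VALUES
    both)

  (let* ((both (values->list (partition even? '(1 2 3 4)))))  ; without it
    both)

  ; each => ((2 4) (1 3))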

> With respect to zero-valued clauses: I'm not sure it really makes
> sense to allow "binding clauses" that do not actually create any
> bindings.  In what situations are you thinking these would be useful?

If you look at imperatively written algorithms, as often found in numerical analysis,
you see variable bindings and statements intermixed at a high rate, due to short-lived
temporary variables. This is not specific to Scheme but to the application domain.

I have written quite a bit of OCaml code lately, and I quickly adopted the
convention of indenting not by scope but by control structure. Example:

        let x1 = foo1 x
        in  foo2 x1;
        let x3 = foo3 x1
        and x4 = foo4 x1
        in  foo5 x1 x3 x4;;

If this stuff goes on for 30 lines or more, then a more conventional indentation
makes you want to tilt your monitor 45 degrees clockwise...

The nature of this sort of code is that there is a linear sequence of things
but the scopes flash in and out like crazy.

Of course, the usual reflex of programmers is to forget about the actual
data flow and write one giant LET*. In Scheme everything has a value and
hence you just write a binding (some-dummy-variable <no-values>)
when you need to execute a statement. The only thing the ((values) <expression>)
notation accomplishes is getting rid of the dummy variable (and the notation for
that is not even very good).
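
Spelled out, the dummy-variable workaround looks like this (FOO and
BAR-TRASHING-ARGUMENT! are the same placeholders as in the example
further below):

  (let* ((y (foo x))
         (_ (display y))                    ; _ is an ordinary, ignored variable
         (z (bar-trashing-argument! y)))
    z)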

Now what you propose actually improves the support for imperative
programs. No doubt, I am in favor of that. (And it has annoyed me more than
once that (begin) is not portable.) However, I am not sure if this SRFI is
the right place to attempt this. Fiddling with LET is touchy already.

Instead of modifying BEGIN, I am more in favor of adding another production:

<binding spec> --> (begin <expression>*),

which means that nothing is bound and the expressions are executed for their
side effects, throwing away whatever values they produce, if any.

Good style or not, it allows one to write

(let* ((y (foo x))
       (begin (display y))
       (z (bar-trashing-argument! y)))
  z)

Two downsides: a) Looks funny. b) You cannot bind a variable named 'begin'.
Any others? If this is all, it could be worth it.
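
For concreteness, one possible desugaring of the example above (a sketch of
the intended semantics only, not a specification):

  (let* ((y (foo x)))
    (display y)                             ; executed for effect, nothing bound
    (let* ((z (bar-trashing-argument! y)))
      z))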

Sebastian.