Re: perhaps I've missed something ...
Because they are related concepts? Because most other programming
languages use the same syntactic construction for both? Because
many people seem to find that a natural way of doing things?
So if you were with 100 other people and they all jumped off the cliff, you
would jump, too?
I have always hoped that Schemers developed things because they were right
not because the majority did it. Here "right" means there is supportive
pragmatic evidence and an underlying theory. I don't understand what
natural means. Give me a definition.
I also believe that we should study both the theory and the practice of
languages. That's what distinguishes us from them. And if there were only
"them", I wouldn't be in CS.
In a nutshell: W/o set! we can think of Scheme as a language that consists
of a universe of data, with N disjoint, recognizable subsets, and operations
on this universe of data. Some operations create data: cons, open-input-file,
lambda. Others extract data from data. Yet others mutate some datum.
Programming means to compose operations on data. We can develop a rational
theory of program design, based on thinking about data and operations. (And
we can teach it that way, which matters to me.) No, we're not finished
developing this theory, but bits and pieces were available when Burge wrote
his book, it's between the lines of the Little Schemer, it's even in
SICP. And Patterns only rediscovers what we knew back then and what we need
to say better. Read the book and weep -- because Schemers and FPers didn't
say it like this earlier. Still, I believe that the very fact that software
archaeology (= patterns) and theory agree is very nice.
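To make the data-and-operations view concrete (my sketch, not part of the original post), a tiny program can be read entirely as constructors, extractors, and their composition:

```scheme
;; A sketch of the data-and-operations view of Scheme.
;; Constructors create data:
(define point (cons 3 4))            ; cons creates a pair

;; Extractors take data apart; programs compose the two:
(define (dist p)
  (sqrt (+ (* (car p) (car p))
           (* (cdr p) (cdr p)))))

(dist point)                          ; 5 (exactness may vary by implementation)
```

Every line here is explainable purely in terms of data and operations on data; no reference to program text is needed.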
Set! cannot be explained as an operation on data. Only its *implementation*
can be explained that way. But we know that we shouldn't punch thru
abstraction barriers in uncontrolled ways. It's sad enough that Perl is done
that way. And Python. The only way to explain set! without this problem is
to refer to program text (the lambda binding and the concrete application
that discharges it) and organization.
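A sketch of the contrast (mine, not from the original post): the only way to say what this set! does is to point at the lambda/let binding of n in the program text, whereas set-car! is handed a datum the way any other operation is.

```scheme
;; set! mutates a *binding*, not a datum:
(define (make-counter)
  (let ((n 0))               ; the binding occurrence that set! refers to
    (lambda ()
      (set! n (+ n 1))       ; explainable only via the program text binding n
      n)))

(define c (make-counter))
(c)                          ; 1
(c)                          ; 2

;; Contrast: set-car! takes a pair -- a datum -- as an argument:
(define p (cons 0 'x))
(set-car! p 99)              ; an operation *on data*; p itself is passed in
```

The counter works only because set! reaches through the abstraction to a specific binding; there is no value you could hand it instead.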
What's in ML? Mutation on refs only. (Too little.)
What's in Haskell plus monads? Data mutation. It's just awkward.
What's in Java? Mutation on objects. An assignment to a private variable is
only an abbreviation for this.x = foo; there are also assignments to statics,
but even the dinosaurs warn you about statics, though they don't understand why.
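The ML design can be mimicked in Scheme by funneling all mutation through explicit ref cells (a sketch; box, unbox, and set-box! are names I am making up here on top of vectors). The point is that this restriction turns mutation back into an operation on data:

```scheme
;; Sketch of ML-style refs in Scheme: mutation only on first-class cells.
(define (box v) (vector v))                   ; ref v
(define (unbox b) (vector-ref b 0))           ; !b
(define (set-box! b v) (vector-set! b 0 v))   ; b := v

(define r (box 0))
(set-box! r (+ (unbox r) 1))
(unbox r)                                     ; 1
```

Unlike set!, set-box! is explainable as an operation on the datum r, without appeal to program text.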
The history of programming languages moves languages away from machine
concepts and to abstractions. The key idea of abstraction is data
abstraction, that is, trying to understand all computation in terms of data,
which is the computational expression of information, and operations on
data. FP and Scheme include functions in the set of data, which makes the
world more uniform.
"A programming language is the sum of its datatypes."
The other key idea of abstraction is the notion of types and operations on
types (which is one level up from the world of data). But I won't pursue that
here.
The easiest example of how the study of theory improves practice is the
notion of assignment in ML over the past 20 years. Adding polymorphic
references and assignment appeared easy. It broke type safety: a polymorphic
ref cell could be written at one type and read back at another. By studying
the theory, people found a way to get polymorphic refs and safety.
Unfortunately the proof of the theorem was wrong, and so was the
implementation. So Tofte developed an (un)natural semantics (sorry, I can't
help but smile at the word "natural" (over)used in CS) and an awkward
theory and a correct proof. SML/NJ implemented it with a few extensions. It
was nearly impossible to model this stuff, and when Greiner did do so, he
promptly broke the system. More and more practical people put more and more
wrinkles on top until Wright went back to the Tofte proof, said "hey, if you
change the axiom like this it's elegant and easy to prove and implement" and
showed that 100,000 lines of hairy ML actually satisfied this theory. Without
someone who understood both the theory and the practice, MLers would still be
looking for the right way to do it.
Engineers have accepted mathematics as a tool for years. Why don't we?