
Re: Various opinions



On the one hand, I really want to be able to count on
Unicode in scheme, and write portable code using it.

On the other hand, I don't want the standard to specify
a particular form of Unicode support that sucks, or which
is not compliant with the Unicode standard itself.

On the gripping hand, I don't want to lock scheme out of
embedded applications with simplistic 8-bit or 7-bit
character sets and no space for the big tables that
Unicode requires.

And on a foot, the issue of whether identifiers (symbols)
are case-sensitive *DEFINITELY* affects whether code is
portable, and it would be murderously hard to specify
case-insensitive symbols if we want to leave open the
option of fully supporting Unicode.
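To see why full Unicode makes case-insensitivity so hard to
pin down: Unicode case mapping is not one-to-one, so two
spellings of "the same" symbol may or may not collide
depending on which folding the reader applies. A small
illustration (Python used purely for demonstration; the
problem is language-independent):

```python
# German sharp s uppercases to TWO characters:
print('ß'.upper())        # prints SS

# Naive lowercasing and full Unicode case folding disagree:
print('ß'.lower())        # prints ß (unchanged)
print('ß'.casefold())     # prints ss

# So whether the symbols STRASSE and Straße name the same
# identifier depends on which folding the reader performs:
print('STRASSE'.casefold() == 'Straße'.casefold())  # prints True
print('STRASSE'.lower() == 'Straße'.lower())        # prints False
```

A standard mandating case-insensitive symbols would have to
pick one of these behaviors, and every choice surprises
somebody.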

Maybe the right approach is for the standard to remain silent
on the issue of character set, while relaxing constraints
that make full Unicode support impossible or murderously
hard, and to have an adjunct or appendix that specifies a
*minimal* form of optional Unicode support.

External representations for single-codepoint Unicode
characters, and for strings and symbols containing them, are
something I think we can all agree that everyone who intends
to support Unicode needs, and that no particular
implementation of Unicode support will be unable to use.
Specifying a binary representation, or the number of
codepoints per character, is something I don't think we need
to do, because it overspecifies Unicode support enough to
force bad or unwanted choices on implementors.
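For concreteness, here is a sketch of what such an external
representation might amount to: a hex escape naming a
codepoint, which any implementation can decode regardless of
its internal encoding. The `\xNNNN;` syntax and the
`decode_escapes` helper are illustrative assumptions, not any
particular Scheme's reader:

```python
import re

def decode_escapes(s: str) -> str:
    # Replace each \xNNNN; escape with the single codepoint
    # it names; internal encoding is left to the implementation.
    return re.sub(r'\\x([0-9A-Fa-f]+);',
                  lambda m: chr(int(m.group(1), 16)), s)

print(decode_escapes(r'\x3BB;-calculus'))   # prints λ-calculus
```

The point is that the escape notation pins down only what the
character *is*, not how it is stored, which is exactly the
level of specification argued for above.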

IOW, let's specify enough that people can write some code
to the standard, and so that the things every
Unicode-supporting Scheme must have in common will be
portable. But not so much that implementations can't still
do things in different ways and find out from experience
what actually works.

				Bear