Re: binary vs non-binary ports

From: Per Bothner <per@xxxxxxxxxxx>
Subject: binary vs non-binary ports
Date: Wed, 15 Sep 2004 21:51:44 -0700

>  From the draft:
>  > Some Schemes may wish to distinguish between binary and non-binary
>  > ports as in Common-Lisp. As these can be layered on top of the
>  > current ports this may better be relegated to a separate SRFI.
> Huh?  This is backwards.  The current ports are character ports.
> As such they are layered on top of byte ports.  I.e. non-binary
> ports are layered on top of binary ports.

Certainly there are implementations that inherently need to
distinguish character and binary ports, so I see Per's point.
I can think of two resolutions.

(1) Changing the phrasing in the draft to mention that:
- Some implementations inherently need to distinguish character
  and binary ports.
- If a port doesn't support the requested operation, an exception
  is raised (already mentioned in the draft).
- The API to distinguish character and binary ports is beyond the
  scope of this SRFI.

(2) Including primitive predicates, something like port-binary-io-capable?,
    in this SRFI, so that portable programs can be written.
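With option (2), a portable program could probe a port before choosing an
I/O strategy. A minimal sketch, assuming the hypothetical
port-binary-io-capable? predicate suggested above together with a
SRFI-56-style read-byte:

```scheme
;; port-binary-io-capable? is hypothetical -- not part of the current draft.
;; Read one octet if the port supports binary I/O, otherwise fall back
;; to character I/O instead of letting the binary read raise an exception.
(define (read-octet-or-char port)
  (if (port-binary-io-capable? port)
      (read-byte port)      ; binary read (SRFI-56 style)
      (read-char port)))    ; character fallback
```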

> It makes no sense to mix character and binary I/O on the same port.
> Anyone who tries it is in a state of sin.

I know of one case where I need to mix both.

There is Scheme source code around whose comments are written
in non-US-ASCII charsets, although the code itself is US-ASCII.
When dealing with such sources, it is very annoying that the
input port throws an "invalid multibyte sequence" exception
while the reader is consuming a comment.  Using binary I/O to
skip comments avoids this situation.   It is not pretty, but
such robustness is mandatory if you're in a community that
exchanges source code with comments in various encodings...
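The comment-skipping trick can be sketched as follows, assuming a
SRFI-56-style read-byte that works on the same port as the character
reader, and an ASCII-compatible encoding in which newline is byte 10:

```scheme
;; Skip a ";" line comment by reading raw bytes, so that an invalid
;; multibyte sequence inside the comment text cannot raise a decoding
;; error.  Assumes newline is byte 10 (true for ASCII-compatible
;; encodings such as EUC-JP, Shift_JIS, and UTF-8).
(define (skip-line-comment port)
  (let loop ((b (read-byte port)))
    (if (not (or (eof-object? b) (= b 10)))
        (loop (read-byte port)))))
```

The reader would call this after seeing #\; and then resume normal
character-level reading on the next line.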