After a very quick look at SRFI-19 I have the following comments:
1) The interface assumes that there are exactly 86400 seconds in a
day. This is not true. There can be exactly 86399, 86400, or 86401
seconds in a day, due to "leap seconds". Note that the points in time
where leap seconds are added (or removed) are under the control of
a committee, and there is no deterministic algorithm to determine in
advance where they occur (it all depends on the speed of rotation of
the earth, which varies slightly, up and down, due to various factors).
So the "seconds" component should be in the range 0-60 inclusive.
According to "man ctime" under Linux, the seconds field can go up to
61, but I don't understand why it goes that high. You should probably
research this a little more. Here are some interesting sources of
information on leap seconds:
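To make the variable day length concrete, here is a sketch (in Python
for concreteness; the table below is illustrative, with two real past
entries — the actual table is published by the IERS and cannot be
computed in advance):

```python
# Illustrative leap-second table: UTC days whose final minute had
# 61 seconds. New entries are announced by a committee (the IERS)
# only months ahead; there is no algorithm to predict them.
POSITIVE_LEAP_SECOND_DAYS = {(1997, 6, 30), (1998, 12, 31)}

def seconds_in_day(year, month, day):
    # 86400 normally; 86401 on a day ending in a positive leap
    # second (a negative leap second, never yet used, would give
    # 86399).
    if (year, month, day) in POSITIVE_LEAP_SECOND_DAYS:
        return 86401
    return 86400
```

A seconds component of 60 is therefore legitimate on the last minute
of such a day.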
2) To be consistent with Scheme naming conventions and practice, you
should rename "get-universal-time" to "current-time",
"current-universal-time", or "current-date".
3) Why limit the resolution of the time datatype to 1 second?
The resolution should be implementation dependent and, if you
insist, at least 1 second. This is so that an implementation
can use the time datatype for finer-resolution timing (such
as a "(thread-sleep! wakeup-time)" procedure I am considering for
my thread SRFI). Otherwise, to do finer-resolution timing you need
another time datatype, and this is rather clumsy.
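The kind of procedure meant here can be sketched as follows (Python,
with sleep_until standing in for the proposed (thread-sleep!
wakeup-time); the name and shape are illustrative):

```python
import time

def sleep_until(wakeup_time):
    # Sleep until an absolute deadline. With only 1-second
    # resolution in the time datatype, the 50 ms deadline below
    # could not even be expressed.
    delay = wakeup_time - time.time()
    if delay > 0:
        time.sleep(delay)

sleep_until(time.time() + 0.05)  # a 50 ms deadline
```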
4) I don't like the fact that the "year" component has a special
meaning between 0 and 99:
Year, an integer representing the year C.E. (i.e., A.D.). If the
integer is between 0 and 99, however, it represents the current
year + the year (if it is less than 50) or the current year - the
year (if it is greater than or equal to 50).
This is because the meaning of a date created with
encode-universal-time will depend on the time when that procedure was
called, and since there is no way to know precisely at what time it
was called, there are (extreme) situations where the intended time is
not clear (e.g. year 50 at the turn of the year 2000, plus or minus
a few nanoseconds, may mean 1950 or 2050).
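A sketch of the two-digit-year rule makes the ambiguity visible
(Python; resolve_year is a hypothetical helper, and the half-open
50-year window is one plausible reading of the quoted rule):

```python
def resolve_year(year, current_year):
    # Two-digit years are mapped into the century that puts the
    # result within 50 years of the current year (here: the window
    # (current-50, current+50], one possible convention).
    if not 0 <= year <= 99:
        return year
    candidate = current_year - current_year % 100 + year
    if candidate > current_year + 50:
        candidate -= 100
    elif candidate <= current_year - 50:
        candidate += 100
    return candidate
```

The same input means different absolute years on either side of the
century boundary: resolve_year(50, 1999) gives 1950, while
resolve_year(50, 2000) gives 2050 — which is exactly the objection.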
5) Instead of the "multiple value" interface of decode-universal-time
I prefer single-value accessors of the kind:

  (universal-time-year ut)
  (universal-time-month ut)
  (universal-time-day ut)
  (universal-time-hour ut)
  (universal-time-minute ut)

and also (universal-time-second ut), which would return a real,
possibly inexact, so that sub-second resolution can be expressed.
6) Why use 1-1-1900 as a base, and why not 1-1-1970, which is the norm
under UNIX? I know this is a convention, but a closer base date gives
more implementation leeway... for example a 2-fixnum representation
(32 bit words, 3 tag bits) counting nanoseconds since 1-1-1900 will
wrap around in 1991, but if you count from 1-1-1970 it will wrap around
in 2061, which is probably reasonable for many applications. And if
you insist on a base other than 1-1-1970, please consider 1-1-2000 at
least.
7) The time datatype should be abstract, i.e. it shouldn't be
a number. There should be conversion functions between time and
seconds since the base time:

  (universal-time->seconds ut)
  (seconds->universal-time secs)
Note again that the result of universal-time->seconds and the
argument of seconds->universal-time should be a real, possibly
inexact, to allow sub-second resolution.
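The shape of such an abstract type can be sketched as follows
(Python; the class and function names simply transliterate the
proposed Scheme names and are illustrative):

```python
class UniversalTime:
    # An opaque time value: not a number, so callers must go
    # through the conversion functions below.
    def __init__(self, seconds):
        self._seconds = seconds  # real, possibly fractional

def seconds_to_universal_time(secs):
    return UniversalTime(secs)

def universal_time_to_seconds(ut):
    return ut._seconds

ut = seconds_to_universal_time(12.5)  # 12.5 s past the base date
```

Keeping the type opaque leaves the implementation free to change the
internal representation (integer ticks, a flonum, a pair, ...) without
breaking clients.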
For your information, Gambit-C represents the time datatype internally
as a flonum, and counts the number of microseconds elapsed since the
base date (so the value 15.0 means 15 microseconds since the turn
of the year 1970). The advantages of this representation are that
1) computing with time (such as time comparison and difference)
can be done very fast (no bignum operation is required)
2) time is represented with microsecond resolution exactly (by an
inexact integer) over the range: base date +/- 285 years (because
there are 53 bits of mantissa)
3) the resolution degrades gradually as the point in time gets further
from the base date (but frankly I don't expect Scheme to survive
for 285 years, although I am sure you'll still be able to buy Cobol
compilers then!)
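The 285-year figure checks out: an IEEE double has a 53-bit mantissa,
so every integer microsecond count up to 2**53 is represented exactly.
A quick verification (Python, whose floats are IEEE doubles):

```python
# 2**53 microseconds, expressed in (Julian) years:
YEARS = 2**53 / (365.25 * 86400 * 10**6)
assert 285 < YEARS < 286   # the claimed +/- 285 year exact range

# Integer microsecond values stay exact up to 2**53...
assert float(2**53 - 1) == 2**53 - 1
# ...beyond which the resolution degrades gradually:
assert float(2**53 + 1) == 2**53
```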