This page is part of the web mail archives of SRFI 25 from before July 7th, 2015. The new archives for SRFI 25 contain all messages, not just those from before July 7th, 2015.
Brad Lucier writes:

> This is a side comment about this SRFI.

That reminds me - I've forgotten to post the revision. I'll try not to forget it again.

> I'm working with fMRI data, which consists of a time series of
> volumes, of slices, of rows, of pixels, of complex numbers. Just
> what an array SRFI should help with.

[...]

> So I looked at Alan Bawden's code and this SRFI. And it turns out
> that neither is at a high enough abstraction level to really
> simplify my life.

They are essentially the same. I think I have a good answer to your concern about working at the level of individual elements. The answer is that these operations are just the primitives that should be used to write the higher-level operations. I have written versions of transpose, map, append along any dimension, and some others. No reduce or scan yet. (I got these names from Iverson's Turing lecture.)

I have nothing on arrays with restricted element types such as floats. That may be a weakness in the current proposal, but it seems to me that there would be problems of specification. Should a map over a vector of floats always produce a vector of floats, for example?

> Both work too much at the "move a word around" level of programming.
> Both assume that there are underlying arrays that are mutable, you
> can set! elements of the arrays, the underlying arrays are generic
> containers (vectors, not f64vectors or f32vectors, ...), etc.

[...]

> And I see now that a multi-dimensional version of abstract-vector is
> what I need for my current purposes. It's unfortunate that the
> current SRFI can't fulfill my needs as usefully, but that's the way
> things go.

Would the inclusion of map and reduce and the like change your perception, or is support for restricted vector types essential? And if the latter is essential, would an implementation using just that type be sufficient, or is it essential to be able to mix and match different backing vector types?
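To illustrate the point about building higher-level operations from the element-level primitives, here is a minimal rank-2 sketch of a map over SRFI 25 arrays. The name array-map2 is illustrative, not part of the SRFI; only make-array, shape, array-start, array-end, array-ref and array-set! are SRFI 25 procedures.

```scheme
;; Rank-2 sketch: a higher-level map written purely in terms of the
;; SRFI 25 element-level primitives.  array-start/array-end give the
;; index range along each dimension; array-ref/array-set! move the
;; individual elements.
(define (array-map2 f a)
  (let* ((r0 (array-start a 0)) (e0 (array-end a 0))
         (r1 (array-start a 1)) (e1 (array-end a 1))
         (b  (make-array (shape r0 e0 r1 e1))))
    (do ((i r0 (+ i 1)))
        ((= i e0) b)
      (do ((j r1 (+ j 1)))
          ((= j e1))
        (array-set! b i j (f (array-ref a i j)))))))
```

For example, given (array (shape 0 2 0 2) 1 2 3 4), mapping (lambda (x) (* x x)) over it yields an array whose element at index (1 1) is 16.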
I do not mean to include higher-level operations in the specification. The point is just that the primitives are meant to be suitable for implementing them, and I think they are, apart from things like redundant error checking.

> To make things precise, I've included my old code at the end of this
> message. It uses the define-structure extension of Gambit-C and a
> few Gambit-C declarations to speed things up. As you can see, there
> are no low-level (word level) operations, but for the most part, you
> don't really need them. Note also abstract-vector->vector, which
> fixes a concrete representation of an abstract-vector so that one
> can use a fixed, precomputed, representation of an abstract vector.

I think element-level access is needed to implement the higher levels. It would be interesting to try to rework everything to do only the index arithmetic and not hide the backing vector, but I'm sure I don't have the time now.

The next question will be whether to finalize or withdraw.

-- Jussi
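As an aside on the index-arithmetic remark above: the arithmetic an array implementation performs can be sketched without any array library at all, as an affine map from a rank-n index to a backing-vector position, origin + i0*s0 + ... + i(n-1)*s(n-1). The name make-indexer below is a hypothetical helper for illustration, not a SRFI 25 procedure.

```scheme
;; Sketch of the affine index arithmetic behind an array: the index
;; (i0 ... in-1) maps to origin + i0*s0 + ... + i(n-1)*s(n-1) in the
;; backing vector.  make-indexer is a hypothetical name.
(define (make-indexer origin strides)
  (lambda indices
    (let loop ((ix indices) (ss strides) (sum origin))
      (if (null? ix)
          sum
          (loop (cdr ix) (cdr ss)
                (+ sum (* (car ix) (car ss))))))))

;; Row-major layout of a 3x4 array: strides are (4 1), origin 0.
(define idx (make-indexer 0 '(4 1)))
```

Here (idx 1 2) evaluates to 6, the flat position of row 1, column 2. Operations like transpose or slicing then reduce to rewriting the origin and strides, leaving the backing vector untouched.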