This page is part of the web mail archives of SRFI 75 from before July 7th, 2015. The new archives for SRFI 75 contain all messages, not just those from before July 7th, 2015.
On Wed, 27 Jul 2005, Tom Emerson wrote:

> Per Bothner writes:
>> If you have the luxury of reading your entire file into memory (and in
>> the process expanding its size by a good bit) you can of course do all
>> kinds of processing and index-building.
>
> I have text files containing 100MB worth of UTF-8 encoded text with
> character offsets in supplemental files. This happens regularly in
> corpus linguistics.

Uh, seconded. Same reason (corpus linguistics). There is no practical way to keep track of "marks" for hundreds of thousands (or millions) of interlinear annotations and still be able to serialize the string and read it back with the marks intact. Numeric offsets do a better, more natural job.

				Bear
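For readers unfamiliar with the setup being described: the annotations live *outside* the string, as plain numeric character offsets, so they serialize and round-trip trivially. A minimal sketch of that standoff-annotation scheme (in Python rather than Scheme, purely for illustration; the sample text, labels, and function name are hypothetical, not from the thread):

```python
# Standoff annotation via numeric character offsets, as described above:
# the corpus text is one string, and the annotations are a separate
# structure of (start, end, label) triples indexing into it.

text = "colorless green ideas sleep furiously"

# Supplemental annotations: numeric character offsets into `text`.
# These serialize trivially (e.g. one "start end label" triple per line)
# and read back exactly, unlike in-string marks.
annotations = [
    (0, 9, "ADJ"),
    (10, 15, "ADJ"),
    (16, 21, "NOUN"),
    (22, 27, "VERB"),
    (28, 37, "ADV"),
]

def annotated_spans(text, annotations):
    """Resolve each (start, end, label) triple to its substring."""
    return [(text[start:end], label) for start, end, label in annotations]

for span, label in annotated_spans(text, annotations):
    print(f"{label}\t{span}")
```

Note that the offsets here count *characters*, not bytes; with a UTF-8 encoded file on disk, the two diverge, which is exactly why the choice of string-indexing unit matters for this use case.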