Sweet-expressions (t-expressions)
Alan Manuel K. Gloria
This SRFI is currently in “draft” status. To see an explanation of each status that a SRFI can hold, see here.

To provide input on this SRFI, please mail to <srfi minus 110 at srfi dot schemers dot org>. See instructions here to subscribe to the list. You can access previous messages via the archive of the mailing list.
This SRFI contains all the required sections, including an abstract, rationale, specification, design rationale, and reference implementation.
This SRFI describes a new extended syntax for Scheme, called sweet-expressions (t-expressions), that has the same descriptive power as s-expressions but is designed to be easier for humans to read. The sweet-expression syntax enables the use of syntactically-meaningful indentation to group expressions (similar to Python), and it builds on the infix and traditional function notation defined in SRFI-105 (curly-infix-expressions). Unlike nearly all past efforts to improve s-expression readability, sweet-expressions are general (the notation is independent of any underlying semantics) and homoiconic (the underlying data structure is clear from the syntax). Sweet-expressions can be used for both program and data input. This notation was developed by the “Readable Lisp S-expressions Project”.
Sweet-expressions can be considered simply a set of additional abbreviations. Sweet-expressions and traditionally formatted s-expressions can be freely mixed, allowing developers to transition easily and to maximize readability when laying out code. For example, a sweet-expression reader would accept either of the equivalent formats shown below:
sweet-expression:

    define factorial(n)
      if {n <= 1}
         1
         {n * factorial{n - 1}}

s-expression:

    (define (factorial n)
      (if (<= n 1)
          1
          (* n (factorial (- n 1)))))
SRFI-49 (Indentation-sensitive syntax) (superseded by this SRFI), SRFI-105 (Curly-infix-expressions) (incorporated by this SRFI), SRFI-22 (Running Scheme Scripts on Unix) (some interactions), SRFI-30 (Nested Multi-line comments) (some interactions), and SRFI-62 (S-expression comments) (some interactions)
Many software developers find Lisp s-expression notation inconvenient and unpleasant to read. In fact, the large number of parentheses required by traditional Lisp s-expression syntax is the butt of many jokes in the software development community. The Jargon File says that Lisp is “mythically from ‘Lots of Irritating Superfluous Parentheses’”. Linus Torvalds commented about some parentheses-rich C code, “don’t ask me about the extraneous parenthesis. I bet some LISP programmer felt alone and decided to make it a bit more homey.” Larry Wall, the creator of Perl, says that, “Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in. (Other than that, it’s quite a nice language.)”. Shriram Krishnamurthi says, “Racket [(a Scheme implementation)] has an excellent language design, a great implementation, a superb programming environment, and terrific tools. Mainstream adoption will, however, always be curtailed by the syntax. Racket could benefit from [reducing] the layers of parenthetical adipose that [needlessly] engird it.”
Even Lisp advocate Paul Graham says, regarding Lisp syntax, “A more serious problem [in Lisp] is the diffuseness of prefix notation... We can get rid of (or make optional) a lot of parentheses by making indentation significant. That’s how programmers read code anyway: when indentation says one thing and delimiters say another, we go by the indentation. Treating indentation as significant would eliminate this common source of bugs as well as making programs shorter. Sometimes infix syntax is easier to read. This is especially true for math expressions. I’ve used Lisp my whole programming life and I still don’t find prefix math expressions natural... I don’t think we should be religiously opposed to introducing syntax into Lisp, as long as it translates in a well-understood way into underlying s-expressions. There is already a good deal of syntax in Lisp. It’s not necessarily bad to introduce more, as long as no one is forced to use it.”
Many new syntaxes have been invented for various Lisp dialects, including McCarthy’s original M-expression notation for Lisp. However, nearly all of these past notations fail to be general (i.e., the notation is independent of an underlying semantic) or homoiconic (i.e., the underlying data structure is clear from the syntax). We believe a Lisp-based notation needs to be general and homoiconic. For example, Lisp-based languages can trivially create new semantic constructs (e.g., with macros) or be used to process other constructs; a Lisp notation that is not general will always lag behind and lack the “full” power of s-expressions.
Recently, using indentation as the sole grouping construct of a language has become popular (in particular with the advent of the Python programming language). This approach solves the problem of indentation going out of sync with the native grouping construct of the language, and exploits the fact that most programmers indent larger programs and expect reasonable indentation by others. Unfortunately, the Python syntax uses special constructs for the various semantic constructs of the language, and the syntaxes of file input and interactive input differ slightly.
SRFI-49 defined a promising indentation-sensitive syntax for Scheme. Unfortunately, SRFI-49 had some awkward usage issues, and by itself it lacks support for infix notation (e.g., {a + b}) and prefix formats (e.g., f(x)). Sweet-expressions build on and refine SRFI-49 by addressing these issues. Real programs by different authors have been written using sweet-expressions, demonstrating that sweet-expressions are a practical notation. See the design rationale for a detailed discussion on how and why it is designed this way.
Sweet-expressions are general and homoiconic, and thus can be easily used with other constructs such as quasiquoting and macros. In short, if a capability can be accessed using s-expressions, then it can be accessed using sweet-expressions. Unlike Python, the notation is exactly the same in a REPL and a file, so people can switch between a REPL and files without issues. Fundamentally, sweet-expressions define a few additional abbreviations for s-expressions, in much the same way that 'x is an abbreviation for (quote x).
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.
“Sweet-expressions” (aka “t-expressions”) deduce parentheses from indentation. A sweet-expression reader MUST interpret its input as follows when indentation processing is active:

- An indented line is a parameter of its parent, and later terms on a line are parameters of the first term on that line.
- A line with exactly one term, and no child lines, represents that term itself (it is not wrapped in a new list).
- A blank line ends the current expression; blank lines before an expression are ignored.

A sweet-expression reader MUST apply these rule clarifications:
The #; datum comment comments out the next neoteric expression, not the next sweet-expression. Datum comments ignore intervening whitespace, including spaces, tabs, and newlines.
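For example, here is a brief sketch of this rule (assuming a conforming reader): the datum comment below discards the entire neoteric expression b(x), not just b and not the rest of the line:

    f a #;b(x) c   ; reads as (f a c)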
A sweet-expression reader MUST implement these sweet-expression “advanced features”:
The marker \\ is specially interpreted. If any terms precede it on the line, it is called SPLIT, and it MUST be interpreted as if it started a new line at the current line’s indentation. If no terms precede \\ on the line, it is called GROUP; it represents no symbol at all, located at that indentation (GROUP is useful for lists of lists).

The marker $ (aka SUBLIST) MUST restart list processing. If $ is preceded by any terms on the line, the right-hand side (including its sub-blocks) is the last parameter of the left-hand side (of just that line). If there is no left-hand side, the right-hand side is put in a list.
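As a brief sketch of these markers (a conforming reader maps each form below to the s-expression shown in its trailing comment):

    f $ g x          ; SUBLIST: (f (g x))
    cos 0 \\ sin 0   ; SPLIT: two datums, (cos 0) and (sin 0)

    let              ; GROUP: \\ alone introduces a list of lists
      \\
        x 1
        y 2
      {x + y}        ; whole form: (let ((x 1) (y 2)) (+ x y))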
The markers for the advanced sweet-expression features MUST only be accepted as such when indentation processing is active, and a character sequence MUST NOT be considered one of those markers if it does not begin with exactly the marker’s first character. For example, {$} MUST NOT be interpreted as the SUBLIST marker; instead, it MUST be interpreted as the symbol $.
A sweet-expression reader is a datum reader
that can correctly read and map sweet-expressions as defined above
(including the advanced sweet-expression features).
An implementation of this SRFI MUST accept
the directive #!sweet
followed by a whitespace character
in its standard datum readers (e.g., read
and, if applicable,
the implementation’s default REPL).
This directive MUST be consumed and considered whitespace.
After reading this directive, the reader MUST accept
sweet-expressions in subsequent datums read from the same port,
until some other conflicting directive is given.
Once a sweet-expression reader is enabled,
the #!sweet
directive MUST be accepted and ignored.
A #!curly-infix
SHOULD cause the current port to switch to SRFI-105
semantics (e.g., sweet-expression indentation processing is disabled).
A #!no-sweet
SHOULD cause the current port to
disable sweet-expression indentation processing and
MAY also disable curly-infix expression processing.
A sweet-expression reader SHOULD support SRFI-30 (Nested Multi-line comments) (#| ... |#) and SRFI-62 (S-expression comments) (#;datum). A sweet-expression reader SHOULD support SRFI-22 (Running Scheme Scripts on Unix) (where #!+space ignores to the end of the line), #! followed by a letter as a directive (such as #!fold-case) that is delimited by a whitespace character or end-of-file, and the formats #!/ ... !# and #!. ... !# as multi-line non-nesting comments.
Implementations of this SRFI MAY
implement sweet-expressions in their datum readers by default,
even when the #!sweet
directive is not (yet) received.
Portable applications SHOULD include the #!sweet
directive before using sweet-expressions, typically near the top of a file.
Portable applications SHOULD NOT
use this directive as the very first characters of a file
because they might be misinterpreted on some platforms
as an executable script header; preceding this directive with a newline
avoids this problem.
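For example, a portable file using sweet-expressions might begin as follows (a sketch; the leading comment line keeps #!sweet away from the very first characters of the file):

    ; A portable sweet-expression file
    #!sweet
    define double(n)
      {n * 2}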
Implementations MAY provide the procedures sweet-read as a sweet-expression reader and/or neoteric-read as a neoteric-expression reader. If provided, these procedures SHOULD support an optional port parameter.
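If an implementation provides these optional procedures, usage might look like the following sketch (the port argument is optional, and open-input-string is the standard string-port constructor):

    (define p (open-input-string "f x\n  g y\n"))
    (sweet-read p)   ; => (f x (g y))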
Implementations SHOULD enable a sweet-expression reader when reading a file whose name ends in “.sscm” (Sweet Scheme). Application authors SHOULD use the filename extension “.sscm” when writing portable Scheme programs using sweet-expressions.
Note that, by definition, this SRFI modifies lexical syntax.
Implementations MAY provide a tool, called an “unsweetener”, that reads sweet-expressions and writes out s-expressions. An unsweetener SHOULD specially treat lines that begin with a semicolon when it is not currently reading an expression (e.g., no expression has been read, or the last expression read has been completed with a blank line). Such a tool SHOULD (when outside an expression) copy exactly any line beginning with a semicolon followed by a whitespace or semicolon. Such a tool SHOULD (when outside an expression) also copy lines beginning with “;#” or “;!” without the leading semicolon, and copy lines beginning with “;_” without either of those first two characters. Application authors SHOULD follow a semicolon in the first column with a whitespace character or semicolon if they mean for it to be a comment.
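As an illustrative sketch (hypothetical input lines, all outside any expression), an unsweetener following these rules would transform:

    ;#!/bin/sh          =>  #!/bin/sh        (leading semicolon removed)
    ;; A comment        =>  ;; A comment     (copied exactly)
    ;_; literal line    =>  ; literal line   (first two characters removed)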
A program editor MAY usefully highlight blank lines (as they separate expressions) and lines beginning at the left column (as these start new expressions). We RECOMMEND that program editors highlight expressions whose first line is indented, to reduce the risk of their accidental use.
The following BNF rules define sweet-expressions; a sweet-expression reader MUST implement the productions below unless otherwise noted. The BNF is intended to capture the specification above; in case of (unintentional) conflict, the specification text above governs. The BNF is an LL(1) grammar, written using ANTLR version 3.
In the summarized BNF below, the action rules inside {...} are in Scheme syntax. You can also separately view the full ANTLR BNF definition of sweet-expressions with Java action rules, along with a support Java class Pair.java.
As with SRFI-49, we model input as being preprocessed and having INDENT and DEDENT tokens inserted to represent the addition or removal of indentation; a single end-of-line may translate to a single EOL followed by multiple DEDENT tokens. (The indent and dedent non-terminals just refer to INDENT and DEDENT respectively.) If the indentation is invalid, BADDENT is generated which is not matched by the grammar.
A sweet-expression reader MUST support three modes: indentation processing, enclosed (when inside pairs of parentheses, brackets, or curly braces, recursively), and initial indent. On initialization a sweet-expression reader MUST be in indentation processing mode. An initial indent MUST enter initial-indent mode, which MUST end on an end-of-line sequence. The markers \\, $, <*, *>, and the abbreviations followed by horizontal space MUST only have their special meaning in indentation processing mode.
There are a few special non-terminals that act essentially as comments and are used to clarify the grammar; each matches an empty sequence. In particular, the error non-terminal makes it clear where a sequence is not defined by this specification, and thus recommends where a parser might specifically check for errors. It also acts as a check on the grammar itself (to help warn the BNF developers of unintended interpretations). Note that errors can occur elsewhere, and an implementation MAY include an extension where an error is noted in this grammar.
The BNF productions below are intentionally written so that they can be easily implemented using a recursive descent parser that corresponds to the given rules. In particular, the rules are given so that it would be easy to implement a parser that does not consume characters unless necessary and that does not require multi-character unread-char (this makes it easy to reuse an underlying read procedure). However, no particular implementation approach is required. Unlike the SRFI-49 BNF, this BNF makes comment and whitespace processing explicit, to make comment and whitespace processing requirements clear.
A sweet-expression reader MUST read n-expression tails greedily. That is, if a potential tail begins with an opening parenthesis, bracket, or brace, it MUST be considered a tail; otherwise, it MUST NOT be considered a tail.
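For example, under this greedy rule (with neoteric expressions as defined in SRFI-105):

    g(x)(y)    ; one datum: ((g x) y), since (y) is a tail of g(x)
    g(x) (y)   ; the space prevents a tail: the line holds g(x) and (y)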
The BNF depends on this utility function:
    ; If x is a 1-element list, return (car x), else return x
    (define (monify x)
      (cond
        ((not (pair? x)) x)
        ((null? (cdr x)) (car x))
        (#t x)))
Here is the actual BNF:
    SPACE : ' ';
    TAB : '\t';
    PERIOD : '.';

    // Special markers, which only have meaning outside (), [], {}.
    GROUP_SPLIT : {(indent_processing)}? => '\\' '\\'; // GROUP/split symbol.
    SUBLIST : {(indent_processing)}? => '$';
    COLLECTING : {(indent_processing)}? => '<*' {restart_indent_level()} ;
    // This generates EOL + (any DEDENTs) + COLLECTING_END, and restores indents:
    COLLECTING_END : {(indent_processing)}? => t='*>' {process_collecting_end($t)};
    RESERVED_TRIPLE_DOLLAR : {(indent_processing)}? => '$$$'; // Reserved.

    // Abbreviations followed by certain whitespace are special:
    APOSW : {(indent_processing)}? => '\'' (SPACE | TAB) ;
    QUASIQUOTEW : {(indent_processing)}? => '\`' (SPACE | TAB) ;
    UNQUOTE_SPLICEW : {(indent_processing)}? => ',@' (SPACE | TAB) ;
    UNQUOTEW : {(indent_processing)}? => ',' (SPACE | TAB) ;

    // Abbreviations followed by EOL also generate abbrevW:
    APOS_EOL : {(indent_processing)}? => '\'' EOL_SEQUENCE SPECIAL_IGNORED_LINE* i=INDENT_CHARS_PLUS
        {emit_type(APOSW); emit_type(EOL); process_indent($i.text $i)};
    QUASIQUOTE_EOL : {(indent_processing)}? => '\`' EOL_SEQUENCE SPECIAL_IGNORED_LINE* i=INDENT_CHARS_PLUS
        {emit_type(QUASIQUOTEW); emit_type(EOL); process_indent($i.text $i)};
    UNQUOTE_SPLICE_EOL : {(indent_processing)}? => ',@' EOL_SEQUENCE SPECIAL_IGNORED_LINE* i=INDENT_CHARS_PLUS
        {emit_type(UNQUOTE_SPLICEW); emit_type(EOL); process_indent($i.text $i)};
    UNQUOTE_EOL : {(indent_processing)}? => ',' EOL_SEQUENCE SPECIAL_IGNORED_LINE* i=INDENT_CHARS_PLUS
        {emit_type(UNQUOTEW); emit_type(EOL); process_indent($i.text $i)};

    // Abbreviations not followed by horizontal space are ordinary:
    APOS : '\'';
    QUASIQUOTE : '\`';
    UNQUOTE_SPLICE : ',@';
    UNQUOTE : ',';

    // Special end-of-line character definitions.
    fragment EOL_CHAR : '\n' | '\r' ;
    fragment NOT_EOL_CHAR : (~ (EOL_CHAR));
    fragment NOT_EOL_CHARS : NOT_EOL_CHAR*;
    fragment EOL_SEQUENCE : ('\r' '\n'? | '\n');

    // Comments. LCOMMENT=line comment, scomment=special comment.
    LCOMMENT : ';' NOT_EOL_CHARS ; // Line comment - doesn't include EOL
    BLOCK_COMMENT : '#|' // This is #| ... |#
        (options {greedy=false;} : (BLOCK_COMMENT | .))* '|#' ;
    DATUM_COMMENT_START : '#;' ;
    // SRFI-105 notes that "implementations could trivially support
    // (simultaneously) markers beginning with #! followed by a letter
    // (such as the one to identify support for curly-infix-expressions),
    // the SRFI-22 #!+space marker as an ignored line, and the
    // format #!/ ... !# and #!. ... !# as a multi-line comment."
    // We'll implement that approach for maximum flexibility.
    SRFI_22_COMMENT : '#! ' NOT_EOL_CHARS ;
    SHARP_BANG_FILE : '#!' ('/' | '.')
        (options {greedy=false;} : .)* '!#' (SPACE|TAB)* ;
    // These match #!fold-case, #!no-fold-case, #!sweet, and #!curly-infix;
    // it also matches a lone "#!". The "#!"+space case is handled above,
    // in SRFI_22_COMMENT, overriding this one:
    SHARP_BANG_MARKER : '#!'
        (('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9'|'-')*)? (SPACE|TAB)* ;

    // IMPORTANT SUPPORTING PARSER DEFINITIONS for the BNF
    hspace : SPACE | TAB ; // horizontal space

    // Production "abbrevw" is an abbreviation with a following whitespace:
    abbrevw returns [Object v]
        : APOSW {'quote}
        | QUASIQUOTEW {'quasiquote}
        | UNQUOTE_SPLICEW {'unquote-splicing}
        | UNQUOTEW {'unquote} ;
    // Production "abbrev_no_w" is an abbreviation without a following whitespace:
    abbrev_no_w returns [Object v]
        : APOS {'quote}
        | QUASIQUOTE {'quasiquote}
        | UNQUOTE_SPLICE {'unquote-splicing}
        | UNQUOTE {'unquote};
    abbrev_all returns [Object v]
        : abbrevw {$abbrevw}
        | abbrev_no_w {$abbrev_no_w} ;

    // Production "n_expr" is a full neoteric-expression as defined in SRFI-105.
    // n_expr does *not* consume any following horizontal space.
    // Uses "n_expr_noabbrev", an n-expression with no leading abbreviations:
    n_expr returns [Object v]
        : abbrev_all n1=n_expr {(list $abbrev_all $n1)}
        | n_expr_noabbrev {$n_expr_noabbrev} ;
    // Production "n_expr_first" is a neoteric-expression, but leading
    // abbreviations cannot have a whitespace afterwards (used by "head"):
    n_expr_first returns [Object v]
        : abbrev_no_w n1=n_expr_first {(list $abbrev_no_w $n1)}
        | n_expr_noabbrev {$n_expr_noabbrev} ;

    // Production "scomment" (special comment) defines comments other than ";":
    sharp_bang_comments : SRFI_22_COMMENT | SHARP_BANG_FILE | SHARP_BANG_MARKER ;
    scomment : BLOCK_COMMENT
        | DATUM_COMMENT_START (options {greedy=true} : hspace)* n_expr
        | sharp_bang_comments ;

    // Production "comment_eol" reads an optional ;-comment (if it exists),
    // and then reads the end-of-line (EOL) sequence. EOL processing consumes
    // additional comment-only lines (if any) which may be indented.
    comment_eol : LCOMMENT? EOL;

    // KEY BNF PRODUCTIONS for sweet-expressions:

    // Production "collecting_tail" returns a collecting list's contents.
    // Precondition: At beginning of line.
    // Postcondition: Consumed the matching collecting_end.
    // FF = formfeed (\f aka \u000c), VT = vertical tab (\v aka \u000b)
    collecting_tail returns [Object v]
        : it_expr more=collecting_tail {(cons $it_expr $more)}
        | (initial_indent_no_bang | initial_indent_with_bang)?
          comment_eol retry1=collecting_tail {$retry1}
        | (FF | VT)+ EOL retry2=collecting_tail {$retry2}
        | collecting_end {'()} ;

    // Production "head" reads 1+ n-expressions on one line; it will
    // return the list of n-expressions on the line. If there is one n-expression
    // on the line, it returns a list of exactly one item; this makes it
    // easy to append to later (if appropriate). In some cases, we want
    // single items to be themselves, not in a list; function monify does this.
    // The "head" production never reads beyond the current line
    // (except within a block comment), so it doesn't need to keep track
    // of indentation, and indentation will NOT change within head.
    // The "head" production only directly handles the first n-expression on the
    // line, and then calls on "rest" to process the rest (if any); we do this
    // because in a few cases it matters if an expression is the first one.
    // Callers can depend on "head" and "rest" *not* changing indentation.
    // On entry, all indentation/hspace must have already been read.
    // On return, it will have consumed all hspace (spaces and tabs).
    // Precondition: At beginning of line+indent
    // Postcondition: At unconsumed EOL
    head returns [Object v]
        : PERIOD /* Leading ".": escape following datum like an n-expression. */
            (hspace+
               (pn=n_expr hspace* (n_expr error)? {(list $pn)}
                | empty {(list '.)} )
             | empty {(list '.)} )
        | COLLECTING hspace* collecting_tail hspace*
            (rr=rest {(cons $collecting_tail $rr)}
             | empty {(list $collecting_tail)} )
        | basic=n_expr_first /* Only match n_expr_first */
            ((hspace+
               (br=rest {(cons $basic $br)}
                | empty {(list $basic)} ))
             | empty {(list $basic)} ) ;

    // Production "rest" reads the rest of the expressions on a line
    // (the "rest of the head"), after the first expression of the line.
    // Like head, it consumes any hspace before it returns.
    // The "rest" production is written this way so a non-tokenizing
    // implementation can read an expression specially. E.G., if it sees a period,
    // read the expression directly and then see if it's just a period.
    // Precondition: At beginning of non-first expression on line (past hspace)
    // Postcondition: At unconsumed EOL
    rest returns [Object v]
        : PERIOD /* Improper list */
            (hspace+
               (pn=n_expr hspace* (n_expr error)? {$pn}
                | empty {(list '.)})
             | empty {(list '.)})
        | scomment hspace* (sr=rest {$sr} | empty {'()} )
        | COLLECTING hspace* collecting_tail hspace*
            (rr=rest {(cons $collecting_tail $rr)}
             | empty {(list $collecting_tail)} )
        | basic=n_expr
            ((hspace+
               (br=rest {(cons $basic $br)}
                | empty {(list $basic)} ))
             | empty {(list $basic)} ) ;

    // Production "body" handles the sequence of 1+ child lines in an it_expr
    // (e.g., after a "head"), each of which is itself an it_expr.
    // It returns the list of expressions in the body.
    // Note that an it-expr will consume any line comments or hspaces
    // before it returns back to the "body" production.
    // Since (list x) is simply (cons x '()), this production always does a
    // cons of the first it_expr and another body [if it exists] or '() [if not].
    body returns [Object v]
        : i=it_expr
            (same
               ( {isperiodp($i)}? => f=it_expr dedent {$f} // Improper list final value
               | {! isperiodp($i)}? => nxt=body {(cons $i $nxt)} )
             | dedent {(list $i)} ) ;

    // Production "it_expr" (indented sweet-expressions)
    // is the main production for sweet-expressions in the usual case.
    // Precondition: At beginning of line+indent
    // Postcondition: it-expr ended by consuming EOL + examining indent
    // Note: This BNF presumes that "*>" generates multiple tokens,
    // "EOL DEDENT* COLLECTING_END", and resets the indentation list.
    // You can change the BNF below to allow "head empty", and handle dedents
    // by directly comparing values; then "*>" only needs to generate
    // COLLECTING_END. But this creates a bunch of ambiguities
    // like a 'dangling else', which must all be disambiguated by accepting
    // the first or the longer sequence first. Either approach is needed to
    // support "*>" as the non-first element so that the "head" can end
    // without a literal EOL, e.g., as in "let <* y 5 *>".
    it_expr returns [Object v]
        : head
          (options {greedy=true}
           : ( GROUP_SPLIT hspace* /* Not initial; interpret as split */
                 (options {greedy=true}
                  : // To allow \\ EOL as line-continuation, instead do:
                    //   comment_eol same more=it_expr {(append $head $more)}
                    comment_eol error
                  | empty {(monify $head)} )
             | SUBLIST hspace* /* head SUBLIST ... case */
                 (sub_i=it_expr {(append $head (list $sub_i))}
                  | comment_eol error )
             | comment_eol // Normal case, handle child lines if any:
                 (indent children=body {(append $head $children)}
                  | empty {(monify $head)} /* No child lines */ )
               // If COLLECTING_END doesn't generate multiple tokens, can do:
               // | empty {(monify $head)}
             ))
        | (GROUP_SPLIT | scomment) hspace* /* Initial; Interpret as group */
            (group_i=it_expr {$group_i} /* Ignore initial GROUP/scomment */
             | comment_eol
                 (indent g_body=body {$g_body} /* Normal GROUP use */
                  | same
                      ( g_i=it_expr {$g_i} /* Plausible separator */
                        /* Handle #!sweet EOL EOL t_expr */
                      | comment_eol restart=t_expr {$restart} )
                  | dedent error ))
        | SUBLIST hspace* /* "$" first on line */
            (is_i=it_expr {(list $is_i)}
             | comment_eol error )
        | abbrevw hspace*
            (comment_eol indent ab=body {(append (list $abbrevw) $ab)}
             | ai=it_expr {(list $abbrevw $ai)} ) ;

    // Production "t_expr" is the top-level production for sweet-expressions.
    // This production handles special cases, then in the normal case
    // drops to the it_expr production.
    // Precondition: At beginning of line
    // Postcondition: At beginning of line
    // The rule for "indent processing disabled on initial top-level hspace"
    // is a very simple (and clever) BNF construction by Alan Manuel K. Gloria.
    // If there is an indent it simply reads a single n-expression and returns.
    // If there is more than one on an initially-indented line, the later
    // horizontal space will not have been read, so this production will
    // fire again on the next invocation, doing the right thing.
    t_expr returns [Object v]
        : comment_eol retry1=t_expr {$retry1}
        | (FF | VT)+ EOL retry2=t_expr {$retry2}
        | (initial_indent_no_bang | hspace+ )
            (n_expr {$n_expr} /* indent processing disabled */
             | ((scomment (options {greedy=true} : hspace)* sretry=t_expr {$sretry}))
             | comment_eol retry3=t_expr {$retry3} )
        | initial_indent_with_bang error
        | EOF {(generate_eof)} /* End of file */
        | it_expr {$it_expr} /* Normal case */ ;
Here are some examples and their mappings. Note that a sweet-expression reader would accept either form in all cases, since a sweet-expression reader is for the most part a traditional s-expression reader with support for some additional abbreviations.
In each example below, the sweet-expression (t-expression) form is shown first, and the s-expression it maps to follows after “⇒”.

    define fibfast(n)   ; Typical function notation
      if {n < 2}        ; Indentation, infix {...}
         n              ; Single expr = no new list
         fibup n 2 1 0  ; Simple function calls

    ⇒ (define (fibfast n) (if (< n 2) n (fibup n 2 1 0)))

    define fibup(max count n-1 n-2)
      if {max = count}
         {n-1 + n-2}
         fibup max {count + 1} {n-1 + n-2} n-1

    ⇒ (define (fibup max count n-1 n-2)
        (if (= max count) (+ n-1 n-2) (fibup max (+ count 1) (+ n-1 n-2) n-1)))

    define factorial(n)
      if {n <= 1}
         1
         {n * factorial{n - 1}}

    ⇒ (define (factorial n) (if (<= n 1) 1 (* n (factorial (- n 1)))))

    define represent-as-infix?(x)
      and
        pair? x
        is-infix-operator? car(x)
        list? x
        {length(x) <= 6}

    ⇒ (define (represent-as-infix? x)
        (and (pair? x) (is-infix-operator? (car x)) (list? x) (<= (length x) 6)))

    define line-tail(x)
      cond
        null?(x) '()
        pair?(x)
          append '(#\space)
            exposed-unit car(x)
            line-tail cdr(x)
        #t
          append LISTSP.SP exposed-unit(x)

    ⇒ (define (line-tail x)
        (cond ((null? x) (quote ()))
              ((pair? x) (append '(#\space) (exposed-unit (car x)) (line-tail (cdr x))))
              (#t (append LISTSP.SP (exposed-unit x)))))

    g factorial(7) my-pi() #f() -i -(cos(0))

    ⇒ (g (factorial 7) (my-pi) (#f) 0-i (- (cos 0)))

    aaa bbb
            ; Comment indent ignored
      cc dd

    ⇒ (aaa bbb (cc dd))

    let ; Demo GROUP
      \\
        var1 cos(a)
        var2 sin(a)
      body...

    ⇒ (let ((var1 (cos a)) (var2 (sin a))) body...)

    myfunction ; Demo SPLIT
      x: \\ xpos
      y: \\ ypos

    ⇒ (myfunction x: xpos y: ypos)

    sin 0 \\ cos 0

    ⇒ (sin 0) (cos 0)

    run $ grep |-v| "xx.*zz" <(oldfile) >(newfile)

    ⇒ (run (grep |-v| "xx.*zz" (< oldfile) (> newfile)))

    a b $ c d e f $ g

    ⇒ (a b (c d e f g))

    define extract(c i) $ cond
      vector?(c) $ vector-ref c i
      string?(c) $ string-ref c i
      pair?(c) $ list-ref c i
      else $ error "Not a collection"

    ⇒ (define (extract c i)
        (cond ((vector? c) (vector-ref c i))
              ((string? c) (string-ref c i))
              ((pair? c) (list-ref c i))
              (else (error "Not a collection"))))

    define merge(< as bs) $ cond
      null?(as) $ bs
      null?(bs) $ as
      {car(as) < car(bs)} $ cons
                              car as
                              merge < cdr(as) bs
      else $ cons
               car bs
               merge < as cdr(bs)

    ⇒ (define (merge < as bs)
        (cond ((null? as) bs)
              ((null? bs) as)
              ((< (car as) (car bs)) (cons (car as) (merge < (cdr as) bs)))
              (else (cons (car bs) (merge < as (cdr bs))))))

    ' a b ; Demo abbreviations
      ' c d e \\ 'f g h

    ⇒ (quote (a b (quote (c d e)) ((quote f) g h)))

    let <* x sqrt(a) *>
    ! g {x + 1} {x - 1}

    ⇒ (let ((x (sqrt a))) (g (+ x 1) (- x 1)))

    let <* x $ {oldx - 1} \\ y $ {oldy - 1} *>
    ! {{x * x} + {y * y}}

    ⇒ (let ((x (- oldx 1)) (y (- oldy 1))) (+ (* x x) (* y y)))

    let <* x $ cos $ f c *>
    ! dostuff x

    ⇒ (let ((x (cos (f c)))) (dostuff x))

    ff ; Demo comments
      #| qq |# t1 t2
      t3 t4
        t5 #| xyz |# t6
        t7 #;t8(q) t9

    ⇒ (ff (t1 t2) (t3 t4 (t5 t6) (t7 t9)))

    f ; Demo improper lists
      a . b

    ⇒ (f (a . b))

    ; Demo BEGIN with an indent
      f(a)
      g(x)

    ⇒ (f a) (g x)

    define init(win area)
      let
        $ style $ get-style win
        set! back-pen $ black style
        set! fore-pen $ white style
        let
          \\
            config $ make-c area
            expose $ make-e area
          set! now expose
          dostuff config expose

    ⇒ (define (init win area)
        (let ((style (get-style win)))
          (set! back-pen (black style))
          (set! fore-pen (white style))
          (let ((config (make-c area)) (expose (make-e area)))
            (set! now expose)
            (dostuff config expose))))
We have separated the design rationale from the overall rationale, as was previously done by SRFI-26, because it is easier to understand the design rationale after reading the specification. It is long because we wish to describe, in some detail, why things are done the way they are, including some helpful comparisons to other efforts.
The following subsections describe the overall basic approach that sweet-expressions take to improve s-expression readability.
There have been a huge number of past efforts to create readable formats for Lisp-based languages, going all the way back to the original M-expression syntax that Lisp’s creator expected to be used when programming. Generally, they’ve been unsuccessful, or they end up creating a completely different language that lacks the advantages of Lisp-based languages.
After examining a huge number of them, David A. Wheeler noticed a pattern: past “readable” Lisp notations typically failed to be general or homoiconic.
See http://www.dwheeler.com/readable/readable-s-expressions.html for a longer discussion on past efforts. In any case, now that this pattern has been identified, new notations can be devised that are general and homoiconic - avoiding the problems of past efforts.
Sweet-expressions were specifically designed to be general and homoiconic, and thus have the possibility of succeeding where past efforts have failed.
Some Lisp developers act as if Lisp notation descended from the gods, and thus is impossible to improve. The authors do not agree, and instead believe that Lisp notation can be improved beyond the notation created in the 1950s. The following is a summary of a retort to those who believe Lisp notation cannot be improved, based on the claims in the Common Lisp FAQ and “The Evolution of Lisp” by Guy Steele and Richard Gabriel. Below are quotes from those who argue against improvement of s-expression notation, and our replies.
The Common Lisp FAQ says that people “wonder why Lisp can’t use a more ‘normal’ syntax. It’s not because Lispers have never thought of the idea - indeed, Lisp was originally intended to have a syntax much like FORTRAN...”.
This is an argument for our position, not for theirs. In other words, even Lisp’s creator (John McCarthy) understood that directly using s-expressions for Lisp programs was undesirable. No one argues that John McCarthy did not understand Lisp. Since even Lisp’s creator thought traditional Lisp notation was poor, this is strong evidence that traditional s-expression notation has problems.
“The Evolution of Lisp” by Guy Steele and Richard Gabriel (HOPL2 edition) says that, “The idea of introducing Algol-like syntax into Lisp keeps popping up and has seldom failed to create enormous controversy between those who find the universal use of S-expressions a technical advantage (and don’t mind the admitted relative clumsiness of S-expressions for numerical expressions) and those who are certain that algebraic syntax is more concise, more convenient, or even more natural...”.
Note that even these authors, who are advocates for s-expression notation, admit that for numerical expressions they are clumsy. We agree that slavishly copying Algol is not a good idea. However, sweet-expressions do not try to create an “Algol-like” syntax; sweet-expressions are entirely general and not tied to a particular semantic at all.
That paper continues, “We conjecture that Algol-style syntax has not really caught on in the Lisp community as a whole for two reasons. First, there are not enough special symbols to go around. When your domain of discourse is limited to numbers or characters, there are only so many operations of interest, and it is not difficult to assign one special character to each and be done with it. But Lisp has a much richer domain of discourse, and a Lisp programmer often approaches an application as yet another exercise in language design; the style typically involves designing new data structures and new functions to operate on them - perhaps dozens or hundreds - and it’s just too hard to invent that many distinct symbols (though the APL community certainly has tried). Ultimately one must always fall back on a general function-call notation; it’s just that Lisp programmers don’t wait until they fail.”
This is a weak argument. Practically all languages allow compound symbols made from multiple characters, such as >=; there is no shortage of symbols. Also, nearly all programming languages have a function-call notation, but only Lisp-based languages choose s-expressions to notate it, so saying “we need function call notation” does not excuse s-expressions. You do not need legions of special syntactic constructs; sweet-expressions allow developers to express anything that can be expressed with s-expressions, without being tied to a particular semantic or requiring a massive set of special symbols.
“Second, and perhaps more important, Algol-style syntax makes programs look less like the data structures used to represent them. In a culture where the ability to manipulate representations of programs is a central paradigm, a notation that distances the appearance of a program from the appearance of its representation as data is not likely to be warmly received (and this was, and is, one of the principal objections to the inclusion of loop in Common Lisp).”
Here Steele and Gabriel are extremely insightful. Today we would say that s-expressions are “homoiconic”. Homoiconic notations are extremely rare, and this property (homoiconicity) is an important reason that Lisps are still used decades after their development. Steele and Gabriel are absolutely right; there have been many efforts to create readable Lisp formats, and they all failed because they did not create formats that accurately represented the programs as data structures. A key and distinguishing advantage of a Lisp-like language is that you can treat code as data, and data as code. Any notation that makes this difficult means that you lose many of Lisp’s unique advantages. Homoiconicity is critical if you’re going to treat a program as data. To do so, you must be able to easily “see” the program’s format. If you can, you can do amazing manipulations.
But what Gabriel and Steele failed to appreciate in their paper is that it’s possible to have a notation that is general, homoiconic, and easier to read. Now that we understand why past efforts failed, we can devise notations that are general and homoiconic - and succeed!
Many people have noted that there are tools to help deal with s-expressions, but this misses the point. If the notation is so bad that you need tools to deal with it, it would be better to fix the notation. The resulting notation could be easier to read, and you could focus your tools on solving problems that were not self-inflicted. In particular, “stopping to see the parentheses” is a sign of a serious problem - the placement of parentheses fundamentally affects interpretation, and serious bugs can hide there.
Others who have used Lisp for years, such as Paul Graham, see s-expressions as long-winded, and advocate for the use of “abbreviations” that can map down to an underlying s-expression notation. Sweet-expressions take this approach.
Making indentation syntactically meaningful eliminates many parentheses, eliminating the need for humans to keep track of them. Real Lisp programs are already indented anyway; currently tools (like editors and pretty-printers) are used to try to keep the indentation (used by humans) and parentheses (used by the computers) in sync. By making the indentation (which humans depend on) actually used by the computer as well, they are automatically kept in sync.
On Lisp’s Readability and Parenthesis Stacking shows one of the many examples of endless closing parentheses and brackets to close an expression, and the confusion that happens when indentation does not match the parentheses. bhurt’s response to that article is telling: “I’m always somewhat amazed by the claim that the parens ‘just disappear’, as if this is a good thing. Bugs live in the difference between the code in your head and the code on the screen - and having the parens in the wrong place causes bugs. And autoindenting isn’t the answer - I don’t want the indenting to follow the parens, I want the parens to follow the indenting. The indenting I can see, and can see is correct.”
An IDE can help keep the indentation consistent with the parentheses, but needing IDEs to use a language is considered by some a language smell. If you need special tools to work around problems with the notation, then the notation itself is a problem.
A solution, of course, is to make the indentation actually matter: Now you don’t need an endless march of parentheses, and indentation can’t be confusing because it is actually used.
“In praise of mandatory indentation...” notes that it can be helpful to have mandatory indentation:
It hurts me to say that something so shallow as requiring a few extra spaces can have a bigger effect than, say, Hindley-Milner type inference. - Chris Okasaki
Other languages, including Python, Haskell, Occam, and Icon, use indentation to indicate structure, so this is a proven idea. Other recently-developed languages like Cobra (a variant of Python with strong compile-time typechecking) have decided to use indentation too, so clearly indentation-sensitive languages are considered useful by many.
One problem with indentation as syntactically relevant is that some transports drop leading space and tab characters. As discussed in the indentation characters section, we have solved this as well.
There’s a lot of past work on using indentation to represent s-expressions.
The sweet-expression indentation system is based on Scheme SRFI-49 (“surfi-49”), aka I-expressions. The basic rules of SRFI-49 (I-expression) indentation are kept in sweet-expressions; these are:

- An indented line is a parameter of its parent.
- Later lines at the same indentation are further parameters of that parent.
- A line with exactly one datum, and no child lines, represents that datum itself (not a one-element list).
These basic rules seem fairly intuitive and do not take long to learn. We’re grateful to the SRFI-49 author for his work, and at first, we just used SRFI-49 directly.
However, SRFI-49 turned out to have problems in practice when we tried to use it seriously. For example, in SRFI-49, leading blank lines could produce the empty list () instead of being ignored, limiting the use of blank lines and leading to easy-to-create errors. As specified, a SRFI-49 expression would never complete until after the next expression’s first line was entered, making interactive use extremely unpleasant. Lines with just spaces and tabs would be considered different from blank lines, creating another opportunity for difficult-to-find errors. The symbol group is given a special meaning, which is inconsistent with the rest of Lisp (where only punctuation has special syntactic meanings). The mechanism for escaping the group symbol was confusing. There were also a number of defects in both its specification and implementation.
Thus, based on experience and experimentation we made several changes to it. First, we fixed the problems listed above. We also addressed supporting other capabilities, namely, infix notation and allowing formats like f(x) (see neoteric expressions as defined in SRFI-105). We also found that certain constructs were somewhat ugly if indentation is required, so we added sublists, split, and collecting list capabilities.
The very existence of SRFI-49 shows that others believe there is value in using syntactically-significant indentation. We are building on the experience of others to create what we hope is a useful and refined notation.
Some Scheme users and implementers may not want indentation-sensitive syntax, or may not want to accept any change that could change the interpretation of a legal (though poorly-formatted) s-expression. For those users and implementers, SRFI-105 adds infix support and neoteric-expressions such as f(x), but only within curly braces {...}, which are not defined by the Scheme specification anyway. SRFI-105 makes it easier to describe the “leaves” of an s-expression tree.
In contrast, sweet-expressions extend SRFI-105 by making it easier to describe the larger structure of an s-expression. It does this by treating indentation (which is usually present anyway) as syntactically relevant. Sweet-expressions also allow neoteric-expressions outside any curly braces. By making sweet-expressions a separate tier, people can adopt curly-infix if they don’t want indentation to have a syntactic meaning or want to ensure that f(x) is interpreted as two separate datums (f and (x)).
An obvious question is, “how do you write them out?” After all, with these notations there is more than one way to present expressions.
But no Lisp guarantees that what it writes out is the same sequence of characters that it read in. For example, some implementations might write (quote x) back as 'x, while on others, reading 'y might be printed as (quote y). Similarly, if you enter (a . (b . ())), many Lisps will write that back as “(a b)”. Nothing has fundamentally changed; as always, you should implement your Lisp expression writer so that it presents a format convenient to both human and machine readers.
Backwards compatibility with traditional Lisp notation is helpful. A reader that can also read traditional s-expressions, formatted conventionally, is much easier to switch to.
The sweet-expression notation is fully backwards-compatible with well-formatted Lisp s-expressions. Thus, a user can enable sweet-expressions and continue to read and process traditionally-formatted s-expressions as well. If an s-expression is so badly formatted that it would be interpreted differently, that s-expression could first be sent through a traditional s-expression pretty-printer and have the problem resolved.
The changes that can cause a difference in interpretation stem from the active use of neoteric-expressions outside of {...} (unlike SRFI-105), and from indentation processing.
Neoteric-expressions are compatible for “normal” formatting. The key issue is that neoteric-expressions change the meaning of an opening parenthesis, bracket, or brace after a character other than whitespace or another opening character. For example, a(b) becomes the single expression “(a b)” in sweet-expressions, not the two expressions “a” followed later by “(b)”. There are millions of lines of Lisp code that would never see the difference. So if you wrote “a(b)” expecting it to be “a (b)”, you will need to insert the space before the opening parenthesis. We believe such s-expressions are poorly (and misleadingly) formatted in the first place; you should write “a (b)” if you intend for these to be two separate datums.
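As a brief illustration (inside a list, where indentation processing is already disabled but neoteric-expressions are active):

    (f a(b))     ; read as (f (a b)): a(b) is one datum
    (f a (b))    ; read as (f a (b)): a and (b) are two separate datums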
Sweet-expressions add indentation processing, but since indentation is disabled inside (...), and initial indentation also disables indentation processing, ordinary Lisp expressions immediately disable indentation processing and typically don’t cause issues. In rare circumstances they can be interpreted differently.
The following subsections describe the specific sweet-expression constructs related to whitespace, indentation, and comment handling, including why they are defined the way they are.
In sweet-expressions, a blank line always terminates a datum, once an expression has started; if (another) expression has not started, blank lines are ignored. That means that in a REPL, once you’ve entered a complete expression, “Enter Enter” will always end it. The “blank lines at the beginning are ignored” rule eliminates a usability problem with the original SRFI-49 (I-expression) spec, in which two sequential blank lines before an expression surprisingly returned (). This was a serious usability problem. The sample implementation did end expressions on a blank line - the problem was that the spec didn’t clearly capture this.
Allowing a blank line to end an expression represents a trade-off between REPL use and use in a file. In a file, a top-level expression could be determined simply by noting that the next expression began on the left column. But this would be hideous to use in a REPL, because it would mean that the results of an expression would only be evaluated after the first (and possibly only) line of the next expression was entered. (Early Pascal I/O implementations had similar problems.)
One solution is to have a special text marker that means “done” (e.g., “.” on a line by itself), but this makes interactive use much less pleasant, since users then have to repeatedly type the special “end-of-expression” marker. As Beni Cherniavsky-Paskin observed on the readable-discuss mailing list (2013-01-16), “I absolutely hate SQL prompts that don’t execute until I add a ;”. Another solution, already in sweet-expressions, is quickly executing one-line commands by typing an indent character first. But users will often not know exactly how long an expression will be until it is done, so this does not help enough.
In contrast, pressing Enter twice is quite easy (since the user’s finger is already on Enter to press it the first time). Thus, the blank line rule is intentionally chosen to help interactive users, at a mild cost to non-interactive users (who then cannot use blank lines without ending the expression).
It would be possible to have blank lines end an expression only in interactive use. In particular, Python does this, since it has different rules for interactive use and files. However, this means that you couldn’t cut-and-paste files into the REPL interpreter and use them directly. David A. Wheeler believes it’s important to have exactly the same syntax in both cases in a Lisp-based system, because in Lisp-based systems, switching between the REPL and files is extremely common. By making “Enter Enter” always end an expression, the notation stays consistent.
Of course, people sometimes want to have something like a blank line in the middle of an s-expression. The solution is that comment-only lines using “;” (indented or not) are completely ignored and not even considered blank lines. That means you can use comment-only lines for the purpose of separating sections in a single datum. The indentation of comment-only lines is intentionally ignored; that way, you don’t have to worry about making sure that comment indentation matches its surroundings. We’ve found that in practice this works very well. In very long expressions (e.g., for a set of definitions in a library), a collecting list can typically be used.
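For example (a sketch with hypothetical function names), the comment-only line below visually separates two sections while the whole definition remains a single datum, and the comment’s indentation does not matter:

    define process(x)
      prepare x
      ; ----- phase two -----
      finish x

    ⇒ (define (process x) (prepare x) (finish x))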
Since a line with only indentation may look exactly identical to a blank line, we decided to clearly state that “a line with only indentation is an empty line”. This eliminates some nasty usability problems that could arise if a “blank” line was interpreted differently if it had some whitespace in it; a silent error like this could be hard to debug.
It is not possible to see trailing horizontal space on most screens and printouts. Thus, the BNF is defined so that in most cases trailing horizontal space is ignored (except in special cases such as being inside a string constant).
Some like to use spaces to indent; others like tabs. Python allows either, and SRFI-49 allows either as well - you just have to be consistent. Sweet-expressions continues this tradition, and is defined so that people can use what they like. The only rule is that they must be consistent; if a line is indented with eight spaces, the next line cannot be indented with a tab.
One objection that people raise about mandatory indentation is that horizontal whitespace can get lost in many transports (HTML readers, etc.). In addition, sometimes there are indented groups that you’d like to highlight; traditional whitespace indentation provides no opportunity to highlight indented groups specially. When discussing syntax, users on the readable-discuss mailing list started to use characters (initially period+space) to show where indentation occurred so that they wouldn’t get lost or to highlight them. Eventually, the idea was hit upon that perhaps sweet-expressions needed to support a non-whitespace character for indentation. This is highly unorthodox, but at a stroke it eliminates the complaints some have about syntactically-important indentation (because it is lost by some transports), and it also provides an easy way to highlight particular indented groups.
At first, we tried
to use period, or period+space, as the indent, as this was vaguely
similar to its use in some tables of contents.
But period has too many
other traditional meanings in Lisp-like languages, including beginning
a number (.9), beginning a symbol (...), and as a special operator to
set the cdr of a list.
Implementation of period as an indent character
is much easier if there is a way to perform two-character lookahead
(e.g., with an unread-char
function),
but unread-char
is not standard in Scheme R5RS,
and Common Lisp does not mandate support for two-character lookahead.
Eventually the “!” was selected instead; it
practically never begins a line, and if you need it, {!...} will work.
The exclamation point is much easier to implement as an indent character,
and it is also a great character for highlighting indented groups.
Indentation processing is disabled inside (...), [ ... ], and { ... }. This was also true of SRFI-49, and of Python, and it has wonderful side-effects.
This means that infix processing by curly-infix disables indentation processing; in practice this doesn’t seem to be a problem.
Initial indentation also disables indentation processing, which also improves backward compatibility and makes it easy to disable indentation processing where convenient.
This improves backward compatibility because a program that uses odd formatting with a different meaning for sweet-expressions is more likely to have initial indents. Even if this is not true, it’s trivially easy to add an initial indent on oddly-formatted old files. This provides a trivial escape, making it easy to support old files. Then even if you have ancient code with odd formatting, it would be likely to still “just work” if there is any initial indentation. We’d like this reader to be a drop-in replacement for read(), so minimizing incompatibilities is important.
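For example, at the top level (assuming a conforming reader):

    foo bar     ; at the left margin: one datum, (foo bar)
      foo bar   ; initially indented: indentation processing is off,
                ; so this reads as two datums, foo and then bar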
There is a risk that this indentation will be accidental (e.g., a user might enter a blank line in the middle of a routine and then start the next line indented). However, this is less likely to happen interactively (users can typically see something happened immediately), and editors can easily detect and show where surprising indentation is occurring (e.g., through highlighting), so this risk appears to be minimal.
Disabling on initial indent also deals with a subtle problem in implementation. We would create significant reader implementation problems if we tried to accept expressions that began with arbitrary indentation on the first line (using that indentation as the starting point). Typically readers return a whole value once that value has been determined, and in many cases it’s tricky to store state (such as that new indentation value) for an arbitrary port. By disabling indentation processing, we eliminate the need to store such state, as well as giving users a useful tool.
Since this latter point isn’t obvious, here’s a little more detailed explanation. Obviously, to make indentation syntactically meaningful, you need to know where an expression indents, and where it ends. If you read in a line, and it has the same indentation level, that should end the previous expression. If its indentation is less, it should close out all the lines with deeper or equal indentation. But we’re trying to minimize the changes to the underlying language, and in particular, we don’t want to change the “read” interface and we’re not assuming arbitrary amounts of unread-char. Scheme R5RS, for example, doesn’t have a standard unread-char at all. Now imagine that the implementation tries to support arbitrary indentation for the initial line of an expression (instead of requiring that expressions normally start at the left edge). Let’s say you are trying to read the following:
    ! ! foo
    ! ! ! bar
    ! ! eggs
    ! ! cheese
You might expect this to return three datums: (foo bar), eggs, and cheese. It won’t, in a typical implementation. To determine that (foo bar) has ended, the reader must consume the indentation of the following line; once the reader returns (foo bar), that indentation has already been consumed, and there is no portable way to unread it or to remember it for the next call (recall that we cannot assume arbitrary amounts of unread-char, nor an easy way to store state for an arbitrary port). There are ways around this, such as storing indentation state with each port or requiring multi-character unread, but they add complexity and are not portable. So for all the reasons above, initial indent disables indentation processing for that line.
A line that starts with a ;
after
the indent is completely ignored,
including the indent of that line.
In contrast, a line that starts with
a #;
datum comment
or a #|
... |#
block comment
after a possible indent is considered
to be indented
at the position where the comment starts.
This means that
in sweet-expressions,
;
line comments
have a subtly different semantic meaning
from datum or block comments.
These are the reasons for this difference between line comments and datum or block comments: a ; line comment always runs to the end of its line, so nothing else can share the line after it, and its indentation can be safely ignored. In contrast, a line can contain the |# terminator for block comments, followed by ordinary datums, so the position of a block or datum comment can matter.
We could have declared that block comments
that include newlines would have the comment-only lines deleted,
and block comments would have each character replaced with a space.
For example:
Original:

    foo #|comment #1|# bar
    #|comment #2|# quux

Could’ve mapped to (but doesn’t!):

    foo                bar
                   quux

Original:

    foo #| block
    comment |# bar quux

Could’ve mapped to (but doesn’t!):

    foo
               bar quux
Consider:

    foo bar #| ...
    |# quux

A simple “outright delete” would yield:

    foo bar quux

This is arguably a misleading translation.
Or consider:

    define foo(x)
      #|
       | First, bar the x.
       | Then quux it so that x is no longer xuuq-able
       |#
      bar x
      quux x
      #| Need to quux here
       | to prevent conflicting with
       | the bar table |#

Again, a simple “outright delete” would yield an empty line right after the “define foo(x)” line.
Instead, what we mandate is that,
if a block or datum comment immediately follows indentation,
it is deleted outright,
and replaced with GROUP/SPLIT (\\
).
Block or datum comments that do not follow indentation
are simply deleted without being replaced with anything:
Original:

    define foo(x)
      #|
       | standalone comment
       |#
      #| pre-comment |# bar #| in-comment |# quux

Maps to:

    define foo(x)
      \\
      \\ bar quux
Although the reasons above pertain mostly to block comments,
datum comments (#;
) are considered
essentially identical to block comments.
We could have mandated a different behavior between datum and block comments. But it is helpful to review the reason for the existence of datum comments. There are two major use cases: commenting out a single short item in the middle of a list, and commenting out an entire (possibly multi-line) datum.

Commenting out a single short item:

    (foo bar #;quux meow)

Commenting out a multi-line datum:

    (define (foo x)
      (if (not (foo-able? x))
          (error "Cannot foo the " x)
          (begin
            (en-bar x)
            ; quuxing is currently buggy
            #;(quux (barred-form x)
                    (co-barred-form x)
                    (de-xuuqed x)))))
For the last case, while typically a multi-line list
is commented out by using ;
line comments,
in standard s-expression syntax all closing parentheses
are “piled on” to the last line.
Using just ;
would also comment out
the closing parentheses of
begin
, if
, and define
.
But with sweet-expressions, there are no explicit closing parentheses. In sweet-expression form, using line comments suffices:
    define foo(x)
      if not(foo-able?(x))
         error "Cannot foo the " x
         begin
           en-bar x
           ; quuxing is currently buggy
           ;;quux
           ;; barred-form x
           ;; co-barred-form x
           ;; de-xuuqed x
Thus, the expected use case of datum comments in sweet-expressions is limited to the first case, i.e. commenting-out a single short item.
Since this first case can be handled sufficiently well by having datum comments take on the same behavior as block comments (i.e., delete outright; if at the start of a line after indent, replace with \\), it was considered simpler to just use the same behavior for both.
This SRFI only requires support for the end-of-line sequences linefeed (LF), carriage return (CR), and CRLF. Earlier versions also supported reversed LFCR, IBM’s NEL (U+0085), Unicode line-separator (LS, U+2028), and Unicode paragraph-separator (PS, U+2029), but these have been dropped, because the only end-of-line markers actually used in practice are LF, CR, and CRLF. For example, these are the only end-of-line markers included in Scheme R7RS draft 9.
John Cowan posted on 2013-02-28 that, “NEL is used only on EBCDIC systems, and conversion to ASCII usually changes it to LF rather than U+0085. LS was Unicode’s attempt to kill CR/LF/CR+LF, which failed completely...” The same problem applies to PS, which is not used in practice.
Reversed LFCR does not happen in practice, and attempting to detect it triggers a bug in many versions of the guile implementation of Scheme. In many versions of guile, peek-char consumes (instead of just peeking) an end-of-file (EOF) marker. Thus, after seeing an LF, peeking to see if there is a CR would consume any EOF after an LF, making ending interactive use awkward on systems that use just LF for end-of-line.
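To make these end-of-line rules concrete, here is a minimal sketch (ours, not the reference implementation) of a routine that consumes one end-of-line sequence, accepting only LF, CR, and CRLF. Note that it only peeks after a CR, so it avoids the guile peek-after-LF problem described above:

; Consume one end-of-line sequence (LF, CR, or CRLF) from port.
; Returns #t if an end-of-line was consumed, #f otherwise.
(define (consume-end-of-line port)
  (let ((c (peek-char port)))
    (cond ((eqv? c #\newline)          ; LF
           (read-char port)
           #t)
          ((eqv? c #\return)           ; CR, possibly the start of CRLF
           (read-char port)
           (when (eqv? (peek-char port) #\newline)
             (read-char port))         ; consume the LF of a CRLF pair
           #t)
          (else #f))))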
Non-empty files must end with an end-of-line sequence, before any end-of-file (EOF) marker, to be portable sweet-expression files. This limitation greatly simplifies the specification and implementation of a sweet-expression reader, without limiting the data that sweet-expressions can represent. In practice, text editors normally create such files anyway, so this is not a serious limitation.
This requirement is not unique to sweet-expressions. For example, several versions of the C language standard say “A source file that is not empty shall end in a new-line character, which shall not be immediately preceded by a backslash character” (section 2.1.1.2 of the ANSI C 1989 standard, section 5.1.1.2 of the ISO C 1999 standard, and section 5.1.1.2 of the ISO/IEC C 2011 standard ISO/IEC 9899:2011).
Sweet-expression reader implementations are free to warn about files that fail to meet this requirement. Sweet-expression reader implementations are also free to support files that do not meet this limitation. The sample reader accepts, in most cases, files that end without a preceding end-of-line sequence.
As described in the specification, a tool (called an “unsweetener”) that reads sweet-expressions and writes out s-expressions SHOULD specially treat certain lines that begin with semicolons.
The initial-semicolon rules for “;” followed by space or semicolon are given so that some comments - particularly the ones about major new components - are likely to be included in a translation from sweet-expressions to s-expressions (namely, any comments that precede an expression). This can greatly simplify examining the generated s-expression. The rules about “;#”, “;!”, and “;_” make it easier to write shell scripts and similar constructs with embedded sweet-expressions; these lines can invoke some Scheme interpreter, possibly via a shell.
This text is limited to only apply to lines outside of any sweet-expression. This is intentional, because it makes it easy to implement an unsweetener on top of an existing sweet-expression reader. The top-level unsweetener tool can simply see if a line begins with a semicolon, and if it does, handle it specially; if it starts with an end-of-line, it can just copy it; and if a line starts with any other character, it can call the sweet-expression reader to handle it. There is no requirement to copy block comments, or comments inside a sweet-expression datum, because this would be much more complicated to do; handling block comments is non-trivial functionality that a sweet-expression reader must perform, and there is no standard way to return comments inside a datum. Semicolon comments immediately after a datum need not be copied or processed specially, because a sweet-expression reader has to consume them to see if it’s reached the end of the datum. A Scheme implementation with unlimited unread could do more with relative ease, but since many Scheme implementations do not have unlimited unread, these limitations make implementation of such tools much simpler.
These rules are based on the unsweeten tool.
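As a rough sketch (ours, not the actual unsweeten tool), the top-level dispatch described above might look like the following, where sweet-read stands in for an existing sweet-expression reader:

; Copy input to output: handle initial-semicolon lines specially,
; copy end-of-line characters as-is, and unsweeten everything else.
(define (unsweeten-loop in out)
  (let ((c (peek-char in)))
    (cond ((eof-object? c) #t)
          ((eqv? c #\;)                      ; comment line: handled specially
           (write-string (read-line in) out) ; (here, simply copied through)
           (newline out)
           (unsweeten-loop in out))
          ((or (eqv? c #\newline) (eqv? c #\return))
           (write-char (read-char in) out)   ; end-of-line: copy as-is
           (unsweeten-loop in out))
          (else                              ; anything else: read a t-expression
           (write (sweet-read in) out)       ; sweet-read: an assumed reader
           (newline out)
           (unsweeten-loop in out)))))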
The following subsections describe other specific sweet-expression constructs, including why they are defined the way they are.
SRFI-49 had a mechanism for defining lists of lists, using the symbol “group”. This was a valuable contribution, since there needs to be some way to show lists of lists.
But after use, it was determined that using an alphabetic symbol to indicate a special abbreviation was a mistake. All other syntactically-special abbreviations in Lisp are written using punctuation; having one that was not was confusing. This symbol is still called the GROUP symbol, and it occurs at the start of a line (after indentation); it is just now respelled as \\.
For example, this GROUP symbol makes it easy to handle multiple variables in a let expression:
let*
  \\
    variable1 my(value1)
    variable2 my(value2)
  do-stuff1 variable1
  do-stuff2 variable1 variable2
A different problem is that sometimes you’d like to have a set of parameters, where they are at the “same level” but writing them as indented parameters takes up too much vertical space. An obvious example is keywords in various Lisps; having to write this is painful:
foo
  keyword1:
  parameter1
  keyword2:
  parameter2
  ....
David A. Wheeler created an early splicing proposal. After much discussion, to solve the latter problem, the SPLIT symbol was created, so that you could do:
foo
  keyword1: \\ parameter1
  keyword2: \\ parameter2
  ....
Or, equivalently:
foo
  keyword1: \\ parameter1
  keyword2: \\ parameter2
At first the symbol \ was used for SPLIT, but this would cause serious problems on Lisps that supported slashification. After long discussion, the symbol \\ was decided on for both GROUP and SPLIT; although the number of characters in the underlying symbol could vary (depending on whether or not slashification was used), this was irrelevant and seemed to work everywhere. By using the same symbol for both GROUP and SPLIT, we reduced the number of different symbols that users needed to escape.
We dropped the SRFI-49 method for escaping the symbol by repeating it (group group); the {} escape mechanism is more regular, and makes it far more obvious that some special escape is going on.
Since “let” occurs in many programs, it would have been possible to define \\ to allow this:
let
! \\ var1 $ bar x
! ! var2 $ quux x
! nitz var1 var2
We discussed this, but after long discussion we decided against it. There are other ways of handling constructs like multi-variable let; also, if the first variable later acquires a more complex expression, this form cannot be easily extended with indentation. Instead, we decided on defining “\\” as an empty symbol, making that expression exactly the same as:
let
! var1 $ bar x
! ! var2 $ quux x
! nitz var1 var2
; =>
; (let (var1 (bar x (var2 (quux x))))
;   (nitz var1 var2))
We did this intentionally. It turns out that there are situations where you want a \\ as an empty symbol, even when text follows it on the line. An example is Arc’s if-then-else, where there are logically pairs of items that, from a list semantic, are at the same level. E.g.:
if
! condition1()
! \\ action1()
! condition2()
! \\ action2()
! \\ otherwise-action()
For a more Scheme-centric viewpoint, some Scheme implementations use keyword objects. For example, in Guile, module declarations look like:
define-module
! \\ amkg cat meow
! #:use-module
! \\ amkg dog woof
! #:export
! \\ (meow hiss)
As noted earlier, there are other ways of handling constructs like multi-variable let. You can use an empty GROUP symbol to achieve the same effect (at the cost of one more line). Also, the collecting list notation (<*...*>) handles short let variable assignment in a more graceful way. Thus, there was no strong reason to use the first semantic, while there were many good reasons to choose the semantic actually chosen.
As with SRFI-49, a leading traditional abbreviation (quote, comma, backquote, or comma-at) right after any indent, and followed by space or tab, is that operator applied to the sweet-expression starting at the same line. For example, a complex indented structure can be quoted simply by prefixing a single quote and space. This makes it easy to add abbreviations to complex indented structures. An abbreviation alone on a line (after indentation), followed by an indented expression, applies that abbreviation to the expression; this seems to be what “users expect”, and supporting it eliminates a potential source of confusion.
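For example (an illustration of ours), the abbreviation rule means that

' a
  b c

applies the quote to the entire indented structure, yielding (quote (a (b c))).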
On 2012-07-18, Alan Manuel Gloria noted that certain constructs were common and annoying to express, e.g., first(second(third(fourth))), and based on Haskell experience, suggested being able to write them as first $ second $ third(fourth). Again, the idea is that this is an abbreviation for a common-enough practice.
This is another example (like GROUP/SPLIT) of a construct that, when you need it, is incredibly useful. It’s not at all unusual to have a few processing or cleanup functions that take a single argument, with all the “real work” nested in something else. Without SUBLIST this would require several levels of indentation; with SUBLIST, such constructs are easily handled.
An example is scsh, which has functions like “run” that are applied to another list. With sublist, this is easily expressed. For example, here’s a sweet-expression using scsh:
run $ grep |-v| "xx.*zz" <(oldfile) >(newfile)
(Oh, and a brief aside: for full Scheme standards compliance, you should escape any symbol beginning with “-” by surrounding it with |...|. One problem is that RnRS does not require support for any symbols that start with “-”, as “-” is not in the set of defined <initial> characters. Many actual Schemes in practice do support such symbols, including the sample implementation, but such code is not portable. Another problem is that “-i” is a number (the negative square root of -1), so that specific option is especially awkward. The sample implementation supports |...|, so |-v| would work and comply with the latest standards. Note, however, that scsh does not yet directly support |...|. These issues have nothing to do with sweet-expressions, but we thought you should know about that.)
SUBLIST also makes certain idioms possible. For instance, some functions need to change their behavior based on the type of the inputs. Here’s an example, a definition that could take advantage of SRFI-105’s $bracket-apply$:
define c[i]
  cond
    vector?(c)
      vector-ref c i
    string?(c)
      string-ref c i
    pair?(c)
      list-ref c i
    else
      error "Not a collection"
This function shows a common occurrence in Scheme programming: a function that immediately begins with cond. The formatting of cond above, however, has several lines that consist of a single n-expression item (e.g. “cond”, “else”, “string?(c)”, etc.).
Vertical space is precious. Using SUBLIST, we can compress the code to:
define c[i] $ cond
  vector?(c) $ vector-ref c i
  string?(c) $ string-ref c i
  pair?(c) $ list-ref c i
  else $ error "Not a collection"
Arguably, this can be done by putting the cond branches in explicit parentheses. However, the idiom supported by SUBLIST is more general than explicit parentheses can be, because SUBLIST does not disable indentation processing. In particular, this idiomatic formatting of cond using SUBLIST makes possible the following code:
define merge(< as bs) $ cond
  null?(as) $ bs
  null?(bs) $ as
  {car(as) < car(bs)} $ cons
    car as
    merge < (cdr as) bs
  else $ cons
    car bs
    merge < as (cdr bs)
Without SUBLIST, the more complex branches of the cond would have to be formatted differently from the simpler branches (unless you are willing to waste a line to write just “as”), or would be expressed in deeply-nested parentheses, defeating the purpose of using sweet-expressions.
After discussion, SUBLIST was accepted in 2012-07-23.
Why is a $ b equivalent to (a b) rather than (a (b))?

When initially learning SUBLIST, some people assume that “a $ b” should map to “(a (b))”. However, the specification specifically does not yield this semantic; “a $ b” maps to “(a b)”. At first, some people think that this is an inconsistency.
However, this is actually more consistent and produces better results.
SUBLIST ($) does not imply that the succeeding text should be a list; instead, it denotes that the succeeding text is the last argument of the current line. More concretely, consider this code:
a
  b
    c d
The sub-list starting with b is the last (and only) argument of a, the sub-list starting with c is the last (and only) argument of b, and so on.
SUBLIST allows us to compress this text into a shorter form:
a $ b
  c d
We can repeat this:
a $ b $ c d
However, if a $ b were (a (b)), we would need to stop at this point, because:
Original | Maps to: |
---|---|
a
  b |
(a b) |
Since outside of SUBLIST, we consistently map a singleton datum as that datum by itself, SUBLIST also consistently maps a singleton datum as that datum by itself.
By selecting this behavior, the example above can be expressed as:
Original | Equivalent to: | Maps to: |
---|---|---|
a
  b
    c d |
a $ b $ c d |
(a (b (c d))) |
This consistency is desirable; let’s review the merge example from the previous question:
define merge(< as bs) $ cond
  null?(as) $ bs
  null?(bs) $ as
  {car(as) < car(bs)} $ cons
    car as
    merge < (cdr as) bs
  else $ cons
    car bs
    merge < as (cdr bs)
We can adopt a coding style where the condition and the branch code in a cond expression are separated consistently by a SUBLIST character. This consistency would be impossible if SUBLIST always created a list, even in the case where the right-hand side is a single datum.
Sweet-expressions without collecting lists (<* ... *>) work well in a vast number of circumstances. However, they can be somewhat awkward for two use cases: a long sequence of definitions contained within an initial statement (such as a library declaration), and short let-style variable assignments.
Let’s begin with the first use case. When there is a long sequence of definitions contained within an initial statement, and no special notation like collecting lists, all the definitions in the long sequence must be indented, and none can be separated by a blank line (since that would end the entire sequence, not just a definition). Indenting almost an entire file is annoying, and forbidding blank lines for that long a stretch invites mistakes.
Here is an example from the R7RS Scheme specification for define-library:
(define-library (example grid)
  (export make rows cols ref each (rename put! set!))
  (import (scheme base))
  (begin
    (define (make n m)
      (let ((grid (make-vector n)))
        (do ((i 0 (+ i 1)))
            ((= i n) grid)
          (let ((v (make-vector m #false)))
            (vector-set! grid i v)))))
    (define (rows grid)
      (vector-length grid))
    (define (cols grid)
      (vector-length (vector-ref grid 0)))
    (define (ref grid n m)
      (and (< -1 n (rows grid))
           (< -1 m (cols grid))
           (vector-ref (vector-ref grid n) m)))
    (define (put! grid n m v)
      (vector-set! (vector-ref grid n) m v))))
This is easily reformatted into the following sweet-expression, but notice the long sequence of indented definitions, which loses a lot of horizontal space and, if long, invites mistakes:
define-library
  example grid
  export make rows cols ref each rename(put! set!)
  import scheme(base)
  begin
    define make(n m)
      let (grid(make-vector(n)))
        do (i(0 {i + 1}))
        ! {i = n} grid
        ! let (v(make-vector(m #false))) vector-set!(grid i v)
    define rows(grid) vector-length(grid)
    define cols(grid) vector-length(vector-ref(grid 0))
    define ref(grid n m)
      and {-1 < n < rows(grid)} {-1 < m < cols(grid)}
          vector-ref vector-ref(grid n) m
    define put!(grid n m v) vector-set!(vector-ref(grid n) m v)
But wholesale changes to sweet-expressions do not seem warranted for this special case, because there are reasons that sweet-expressions are defined the way they are. It is fundamental that a child line is indented from its parent, since that is the point of indentation. Opening a parenthesis intentionally disables indentation processing; this is what developers typically expect (note that both Python and SRFI-49 do this), and it also makes sweet-expressions very backwards-compatible with traditional s-expressions. Ending a definition at a blank line is very convenient for interactive use, and interactive and file notation should be identical (since people often switch between them).
Now let’s look at the second use case. The sweet-expression notation cleanly handles cases where let-expression variables have complex values (e.g., using \\), but for simple cases (1-2 variables having short initial values) it can take up more vertical space than traditional formatting. Using a leading “$” takes up somewhat less vertical space, but it still takes up an additional line for a trivial case, it does not work the same way for let expressions with 2 variables, and David A. Wheeler thinks it is a rather unclear construction. In particular, you cannot use “$ x 5 $ y 7” for a two-variable let statement; that would map to ((x 5 (y 7))), not ((x 5) (y 7)). You can also use parenthetical notation directly, but this is relatively ugly, and it is annoying to need to do this for a common case. A similar argument applies to do-expressions, and these are not at all unusual in Scheme code:
let ; Using \\ takes up a lot of vertical space in simple cases
  \\
    x 5
  {x + x}

let
  \\
    x 5
    y 7
  {x + x}

let ; Less vertical space, but works for 1 variable only
  $ x 5
  {x + 5}

; The two-variable format can be surprising and does not let the
; programmer emphasize the special nature of the variable assignments
; (compared to the later expressions in a let statement).
let
  x(5) y(7)
  {x + 5}

let (x(5)) ; Use parentheses
  {x + x}

let (x(5) y(7))
  {x + x}
A collecting list is surrounded by the markers <* and *>. The <* and *> represent opening and closing parentheses, but restart indentation processing at the beginning instead of disabling indentation processing, and collect any sweet-expressions inside. The purpose of collecting lists is to make it easy to clearly express these and similar use cases.
In a collecting list, horizontal spaces after the initial <* are consumed, and then sweet-expressions are read. These t-expressions must not be indented (though you can indent lines with only ;-comments).
Here is an example of using collecting lists for the library structure above:
define-library
  example grid
  export make rows cols ref each rename(put! set!)
  import scheme(base)
  <* begin

define make(n m)
  let (grid(make-vector(n)))
    do (i(0 {i + 1}))
    ! {i = n} grid
    ! let (v(make-vector(m #false))) vector-set!(grid i v)

define rows(grid) vector-length(grid)

define cols(grid) vector-length(vector-ref(grid 0))

define ref(grid n m)
  and {-1 < n < rows(grid)} {-1 < m < cols(grid)}
      vector-ref vector-ref(grid n) m

define put!(grid n m v) vector-set!(vector-ref(grid n) m v)

*>
Here are some examples of collecting lists for the let-variable cases:
let <* x 5 *>
  {x + x}
; ==> (let ((x 5)) (+ x x))

let <* x 5 \\ y 7 *>
  {x + x}
; ==> (let ((x 5) (y 7)) (+ x x))
The collecting list symbols are carefully chosen. The characters < and > are natural character pairs that are available in ASCII. What is more, they are not delimiters, so any underlying Scheme reader will not immediately stop on reading them (making it easier to reuse). The “*” is more arbitrary, but the collecting list markers need to be multiple characters to distinguish them from the less-than and greater-than procedures, and this seemed to be a fairly distinctive token that is rarely used in existing code.
It seems prudent to have a symbol available for future expansion. Thus, the marker $$$ is reserved for future use. This means that $$$ must be escaped (e.g., using {...}) if it is used in an indentation-processing context.
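For example (an illustration of ours; price-marker is a hypothetical name), a program that needs $$$ as an ordinary symbol in an indentation-processing context could write:

define price-marker {$$$}

Since {e} in curly-infix is simply e, this maps to (define price-marker $$$).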
The following subsections compare sweet-expressions to a few of the many alternative notations that exist (including some alternatives created during its construction).
M-expressions (or meta-expressions) are a notation developed by John McCarthy, and were intended to be the primary notation for developing software in Lisp. As later explained by John McCarthy in “History of Lisp” (1979-02-12), “The project of defining M-expressions precisely and compiling them or at least translating them into S-expressions was neither finalized nor explicitly abandoned. It just receded into the indefinite future, and a new generation of programmers appeared who preferred internal notation to any FORTRAN-like or ALGOL-like notation that could be devised.”
Documents such as the LISP 1.5 Programmer’s Manual do hint at the intended syntax of M-expressions. Function names were written in lower case letters (to distinguish them from atoms, which were only upper case), followed by a pair of square brackets. Inside the square brackets were semicolon-separated arguments. Thus, the M-expression cons[A; (B C)] represented the s-expression (cons A (B C)); if computed it would produce (A B C). M-expressions included some other features, for example:
third[x]=car[cdr[cdr[x]]]
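(This defines a function; in s-expression form it corresponds roughly to (define (third x) (car (cdr (cdr x)))), although LISP 1.5 had its own definition mechanism.)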
The fundamental problem with M-expressions was that they were not general. When a new syntactic structure was created (e.g., with a macro), the new construct could easily be accessed using s-expressions, but not with M-expressions. Also, M-expressions were never widely implemented; if you wanted to actually use a Lisp-based language, you had to use s-expressions.
Sweet-expressions avoid these problems of M-expressions. The sweet-expression notation is not tied to any particular semantic, and it has been implemented multiple times.
Honu, as described in Honu: Syntactic Extension for Algebraic Notation through Enforestation, is “a new language that fuses traditional algebraic notation (e.g., infix binary operators) with Scheme-style language extensibility. A key element of Honu’s design is an enforestation parsing step, which converts a flat stream of tokens into an S-expression- like tree, in addition to the initial ‘read’ phase of parsing and interleaved with the ‘macro-expand’ phase. We present the design of Honu, explain its parsing and macro-extension algorithm, and show example syntactic extensions.”
In particular, the Honu authors state that their “immediate goal is to produce a syntax that is more natural for many programmers than Lisp notation - most notably, using infix notation for operators - but that is similarly easy for programmers to extend. Honu adds a precedence-based parsing step to a Lisp-like parsing pipeline to support infix operators and syntax unconstrained by parentheses. Since the job of this step is to turn a relatively flat sequence of terms into a Lisp-like syntax tree, we call it enforestation. Enforestation is not merely a preprocessing of program text; it is integrated into the macro-expansion machinery so that it obeys and leverages binding information to support hygiene, macro-generating macros, and local macro binding - facilities that have proven important for building expressive and composable language extensions in Lisp, Scheme, and Racket.” An example of its syntax, per its paper, is:
function quadratic(a, b, c) {
  var discriminant = sqr(b) - 4 * a * c
  if (discriminant < 0) {
    []
  } else if (discriminant == 0) {
    [-b / (2 * a)]
  } else {
    [-b / (2 * a), b / (2 * a)]
  }
}
At the surface, perhaps the most obvious difference is that Honu uses {} for major structures, in a way that looks somewhat similar to C, instead of using indentation. This means that, like Scheme and C, users must use tools to keep the visual indentation consistent with the {} that are actually used to nest constructs... leading to the risk that they will go out of sync (misleading human readers). Another obvious difference is that Honu supports user-defined precedence levels; as noted in SRFI-105, this causes trouble in dealing with operators if the precedence is defined differently in different code sections, and also makes it more difficult for human readers to determine where lists begin and end.
There are some surface similarities as well. Honu does support a more traditional-looking function call notation, of the form “quadratic(a, b, c)”. Sweet-expressions accept a similar function call format, though without the commas (which we found were annoying in practice, as they were extraneous and interfered with the comma operator). Both Honu and sweet-expressions accept infix notation, which are essentially universally used elsewhere, though with some minor differences in syntax (in part due to Honu’s use of precedence).
But Honu’s major approach is fundamentally different; the syntax is actually embedded with the language, making it difficult to separate the two: “To handle infix syntax, the Honu parser relies on an enforestation phase that converts a relatively flat sequence of terms into a more Scheme-like tree of nested expressions. Enforestation handles operator precedence and the relatively delimiter-free nature of Honu syntax, and it is macro-extensible. After a layer of enforestation, Scheme-like macro expansion takes over to handle binding, scope, and cooperation among syntactic forms. Enforestation and expansion are interleaved, which allows the enforestation process to be sensitive to bindings.” Honu’s approach enables new syntaxes and meanings to be installed, which its authors presumably expect to be a good thing, but this approach also has significant downsides.
Honu’s approach appears to impede generality. For example, {...} is defined as starting “a new sequence of expressions that evaluates to the last expression in the block.” Note that this definition is more than simply the definition of a list in terms of syntax; the notion of how to calculate it seems to be embedded in the syntax. Honu’s approach seems to be at odds with the idea that a notation should be independent of the evaluation approach.
Honu’s approach certainly sacrifices homoiconicity. The whole Honu process invokes macros that can transform the results. What’s more, these macros can be defined later. As a result, it is not possible to know what a syntactic construct means without knowing all the transformation definitions active at the time the construct was read. The precedence definitions for infix operators are an example of this problem, but this turns out to be systemic in Honu. In short, Honu’s approach is at odds with the idea that a human reader should be able to read just the surface syntax, without knowing anything about what macros are active, and still know exactly what the underlying structure will be.
Another complication with Honu is that it is not backwards-compatible with existing Lisp constructs. In Honu, the “(expression)” production “performs the traditional role of parenthesizing an expression to prevent surrounding operators with higher precedences from grouping with the constituent parts of the expression”. It seems that internally, the base Honu reader does read it in as a single-item list. But the subsequent enforestation step removes any extra layers of parentheses. This semantic is similar to many other languages, but it means that a Honu reader cannot double as a Scheme reader. In contrast, most users could silently switch to a sweet-expression reader and have no idea that a change had occurred, since normally-formatted Scheme expressions will continue to work unchanged. This means it is much easier to transition to sweet-expressions.
Honu’s approach ties together desugaring and macro-expansion; the text “foo(bar, quux)” is two datums, “foo” and “(bar |,| quux)”, and the enforestation step (which doubles as the macro-expansion step) converts it to “(foo bar quux)” at the Racket level. Honu’s macros are not actually the same type as the hosting Racket implementation’s macros: a honu-block Racket macro calls the enforest routine, which then calls Honu-level macros.
Fundamentally, the Honu approach sacrifices both generality and homoiconicity to achieve readability. In addition, its use of {...} creates the risk that visual indentation will be inconsistent with the actual expression structure. We applaud Honu’s goal of readability, but do not believe its sacrifices are necessary to achieve that goal.
An interesting experimental notation, “Q2”, was developed by Per Bothner; see http://per.bothner.com/blog/2010/Q2-extensible-syntax/.
Q2 has somewhat similar goals to the “readable” project, though with a different approach. The big difference is that David A. Wheeler decided it was important to have a general notation for any s-expression. Here is a brief additional comparison:
For example, in sweet-expressions a zero-argument function call is written with () after it or around it, e.g., pi().

P4P: A Syntax Proposal by Shriram Krishnamurthi describes an alternative, more readable format for the Racket implementation of Scheme. There are some similarities, but many differences.
P4P supports functional name-prefixing such as f(x), just as sweet-expressions do. However, function parameters are separated by commas (an extra character not typical in Lisp code, and in our experiments something of a pain, since parameters are very common). P4P does not support infix notation at all, even though practically all non-Lisp languages support it.
P4P has a very different view of indentation, compared to sweet-expressions. In P4P, indentation does not control semantics. Instead, “the semantics controls indentation: that is, each construct has indentation rules, and the parser enforces them. However, changing the indentation of a term either leaves the program’s meaning unchanged or results in a syntax error; it cannot change the meaning of the program.”
This means that P4P has a large number of special-case syntactic constructs. For example, defvar: and deffun: specially use “=”, if: has intermediate keywords, and so on. While this looks nice when you stay within its set, it encounters the same problem that McCarthy had with M-expressions: There are always new constructs, including ones in meta-languages (not the underlying Scheme implementation) and macros. The P4P author notes that, “it would be easy to add new constructs such as provide:, test:, defconst: (to distinguish from defvar:), and so on”, but this misses the point; the task of defining constructs inhibits the use of those constructs, and may be impractical if there are syntactic differences at different language levels. For example, imagine processing lists where “deffun” has a different definition than the underlying language; this is trivial with s-expressions and sweet-expressions, but not practical using P4P.
The P4P author notes that, “the parser can be run in a mode where indentation-checking is simply turned off... This can be beneficial when dealing with program-generated code.” However, now the developer must deal with enabling various modes, and this mode is needed not just for program-generated code, but for code that has mixtures of various languages. Rather than having multiple modes, a single mode that works everywhere seems more useful to the developers of the sweet-expression notation.
In short, P4P fails to be general; it is tied to specific semantics. Previous readability efforts, such as M-expressions, failed, and we believe that one reason was that those notations failed to be general. We applaud the admirable goals of P4P, but do not think it represents the best way forward.
However, while we believe different design choices need to be made, we applaud the effort. In addition, we believe that P4P is additional evidence that people are interested in improving the readability of Lisp, and that indentation can help do so.
The “Z” language by Chris Done (not related to the Z specification language) has been discussed on Reddit, and was reported to the readable-discuss mailinglist by Ben Booth on 2013-01-02. It’s an indentation-based lisp-like language, although the indentation rules differ somewhat from sweet-expressions.
In Z, a whitespace-separated sequence of terms applies to the next, so:
foo bar mu zot
would parse (in s-expression form) as (foo (bar (mu zot))). As its documentation states, “To pass additional arguments to a function, the arguments are put on the next line and indented to the column of the first argument.”
This is an interesting approach, but David A. Wheeler agrees with 1337hephaestus_sc2 on Reddit: “The main idea seems clever, but also too clever.”
Here are a few issues with Z syntax compared to sweet-expressions:
Z is also sensitive to the column widths of items. For example, given

fee fie foe fum
            foo bar

this would be (fee (fie (foe fum (foo bar)))), but merely changing “fie” to “faction” would produce

fee faction foe fum
            foo bar

which would be interpreted as (fee (faction (foe fum) (foo bar))).
Genyris is another indentation-based Lisp. “All Genyris expressions are parsed and stored as linked-lists. A single line is converted into a single list. Sub-expressions are denoted in two ways, either within parentheses on a single line, or by an indented line. For example the following line contains two sub-expressions:
Alpha (Beta Charlie) (Delta)
“Sub-expressions made using parentheses must remain within a single line, they are not permitted to wrap. Indented lines are deemed to be sub-expressions of the superior, less indented, lines above. The above expression can be written in indented form as follows:”
Alpha
  Beta Charlie
  Delta
Thus, it is similar to the main rule of t-expressions, except that Genyris wraps “ALL sublines in lists, even if they consist of a single element.” As Beni Cherniavsky-Paskin notes, “It can get away with that simpler rule because all data objects are callable and eval to [themselves]... In fact it’s much cleverer, though that’s irrelevant for us. All objects are actually macros (“lazy functions” in the manual’s terminology). What objects do if called with arguments - e.g. (“foo” arg1 arg2) - is evaluate those arguments in a dynamic-binding env enriched by the object’s methods, and return the last value. Dynamic scope only affects names starting with a dot, other names use lexical scoping. All this forms a clever implementation of method calling:
"ball" (.replace "l" "na") "banana"
While interesting, this notation is less useful for general-purpose s-expressions, in particular, it makes it more difficult to notate simple atoms.
On 2013-02-08, Arne Babenhauserheide made an alternative indentation proposal and posted it on the readable-discuss mailing list.
Aside from the basic indentation-means-subitem, it has the following important points:
A “:” marker indicates that an indentation is explicitly placed at the column where that marker is. That is, you might conceptually consider it as ending a line, then inserting an indentation to that column position, followed by the text after the :. As a precis, a : on an indented line by itself is a placeholder indicating an indentation at its column position, similar to our GROUP \\ marker. For example, the following are equivalent:
Arne formulation | Basic indentation format | s-expression |
---|---|---|
let : : x : compute 'x
      : y : compute 'y
    use x y |
let
  :
    x
      compute 'x
    y
      compute 'y
  use x y |
(let ((x (compute 'x))
      (y (compute 'y)))
  (use x y)) |
Every line is wrapped in a list, even a line containing a single datum:

Arne formulation | s-expression |
---|---|
foo
(bar)
5
#f |
(foo) ((bar)) (5) (#f) |
A “.” marker, when it starts a line, splices the list after it into the parent list. This is primarily used to turn the single-item lists formed by the previous rule into actual single datums.
Arne formulation | s-expression |
---|---|
foo
  bar
  . 5
  . #f #t "hello" |
(foo (bar) 5 #f #t "hello") |
Arne formulation | s-expression |
---|---|
foo
  bar quux
  kuu nitz |
(foo (bar quux) (kuu nitz)) |
After being proposed, it was suggested that rule 2 above be amended to be similar to the equivalent rules in SRFI-49 and this SRFI; that is, a single datum on a line by itself should be only that datum, not wrapped in a list. Further, a “.” marker followed by a single datum without a child line should be a no-op.
Rule 2 was formulated that way since the intention was to build an indentation processor, not a full parser. However, further discussion revealed that a simple rule could be formulated to differentiate between one-item and two-item lines; specifically, a space outside of parentheses or strings indicated that the line had two or more items. Thus even a simple indentation processor could support SRFI-49-like rule 2.
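For instance, under the amended rule (our illustration):

foo          ; no space outside parentheses or strings: just the datum foo
foo bar      ; a space outside parentheses: the two-item list (foo bar)
(foo bar)    ; the only space is inside parentheses: just the datum (foo bar)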
This proposal was initially quite attractive (at least to Alan Manuel K. Gloria). It is simpler to describe informally, and appears, at first glance, to replace many actual uses of GROUP/SPLIT, SUBLIST, and COLLECTINGLIST. Thus, it was hoped that these three extensions could be replaced with the simpler : marker rule. However, there are use cases where SUBLIST has superior semantics over Arne’s :.
For instance, consider the following SUBLIST code:
call/cc $ lambda (exit)
  body ...
Replacing this with Arne’s : requires further indenting the body to after the : marker.
call/cc : lambda (exit)
          body ...
With Arne’s formulation, a trade-off exists: either (1) add a separate line for the lambda (which increases vertical lines in exchange for reduced indentation), or (2) use : (which increases horizontal indentation in exchange for reduced vertical lines).
either (1) | or (2) |
---|---|
call/cc
  lambda (exit)
    body ... |
call/cc : lambda (exit)
          body ... |
SUBLIST is powerful precisely because it collects child lines. This allows you to simultaneously reduce horizontal indentation and vertical lines.
The : and . markers are also insufficient replacements for GROUP/SPLIT. At first glance it might seem that . is superior to the SPLIT meaning of \\:
Arne’s formulation | sweet-expression |
---|---|
export
  . api-init api-use api-close |
export
  api-init \\ api-use \\ api-close |
But we expect that, more typically, you want to express code that looks like this:

Arne’s formulation | sweet-expression |
---|---|
begin
  . (display "Welcome, ")
    (display player)
    (display ", to the Dungeons!")
    (newline) |
begin
  display "Welcome, " \\ display player \\ display ", to the Dungeons!" \\ (newline) |
If you truly want several single items to be spliced, the following trick takes advantage of the fact that indentation processing is disabled inside parentheses:
export
  . ( api-init
      api-use
      api-close )
Arne’s formulation also does not have a method to conveniently express a single gigantic top-level datum that contains several complex sub-datums, a.k.a. the define-library problem.
<* define-library \\ (example)
import (scheme base)
export . ( example-init example-open example-close )

<* begin

define example-init()
  whatever ...
  ...

define example-open(x)
  whatever ...
  ...

define example-close(y)
  whatever ...
  ...

*> ; begin
*> ; define-library
We could retain COLLECTINGLIST and live without the SPLIT behavior, or even SUBLIST, though these would be important losses. Conversely, they could be re-added, but at that point the proposal’s simplicity has completely disappeared. And all of this ignores the biggest problem.
The most important problem with this proposal is that it falsely assumes that it’s possible to know the visual width of different characters. In today’s world, this is impractical, especially across the many different implementations of Scheme and other Lisps.
Most obviously this presumption is false on systems with variable-width fonts, and these are widely used for email messages. You simply cannot presume you know anything about the actual widths of different character sequences in this case.
Even when only Western symbol sets are used, some letters can or must be expressed using combining characters. In these cases, what is stored as two characters is supposed to be displayed as one.
For another example, some East Asian characters, called fullwidth characters, should be displayed in two columns even on a fixed-width font display. In Arne’s formulation, the width of non-whitespace characters is significant, since the : marker can record the column position after non-whitespace characters occur. This SRFI, on the other hand, requires recording only the column position of horizontal whitespace characters; we handle the different possible widths of the TAB character by requiring consistent indentation.
Arne’s formulation requires either that implementations know all fullwidth characters (a much longer list than the list of horizontal whitespace characters), or would leave handling of fullwidth characters up to implementations, meaning that indentation expressions have potential portability problems.
Granted, almost all code will not utilize symbols containing fullwidth East Asian glyphs; but one must still consider strings containing fullwidth East Asian glyphs, which we expect to occur regularly in East Asia.
This also brings up the issue of character encoding. To properly recognize fullwidth characters, the encoding must be known. Granted, many East Asian-specific encodings use two bytes for fullwidth characters and one byte for halfwidth characters, so a simple byte-as-character interpretation would keep track of column positions correctly if you are using such an East Asian-specific encoding - until you re-encode the text into UTF-8.
UTF-8 use is spreading; it can encode any Unicode code point, and is largely backward-compatible with ASCII. But East Asian fullwidth characters do not necessarily encode in two bytes in UTF-8. Not to mention that many more characters in UTF-8 are encoded in 3 or more bytes but take up just one column, not 3 or more. Even if these characters do not occur in identifiers, they can occur in strings, and such strings might usefully be placed before a : marker.
If we are sensitive to only initial indentation, then we need only worry about the widths of two characters, TAB and SPACE (and ! for this SRFI). This causes no problems in this SRFI, because indentation is required to be consistent across lines. In contrast, in Arne’s proposal, we need to worry about the widths of every character, and also know the encoding. Scheme code (and Lisp code in general) will increasingly need to embed strings with international (non-ASCII) characters, and R7RS at least allows optional support for symbols that contain international (non-ASCII) characters; R6RS mandates that support.
After discussion, this proposal was turned down by the authors of this SRFI.
On 2013-02-18, Beni Cherniavsky-Paskin proposed an extension of SUBLIST semantics, to “allow closing SUBLIST by [partial] dedenting”. Informally, in Beni’s proposed extension, any occurrence of SUBLIST would mark a fresh indent level, which could be matched by an otherwise-unmatched dedent. For example:
Extended SUBLIST | Equivalent |
---|---|
outer1 outer2 $ inner1
! ! inner2
! outer3 |
outer1 outer2
! inner1
! ! inner2
! outer3 |
let $
! ! x $ compute 'x
! ! y $ compute 'y
! use x y |
let
! \\
! ! x $ compute 'x
! ! y $ compute 'y
! use x y |
The original formal description by Beni Cherniavsky-Paskin, as expanded by Alan Manuel K. Gloria, involves moving SUBLIST and SPLIT processing from the parser to the indentation preprocessor (i.e. the part that inserts INDENT and DEDENT tokens). In the current specification, the indentation preprocessor handles a stack of indentations (in the implementation, a cons-cell stack of strings). Beni’s formulation expands this stack to include the special indentation marker ?. In the succeeding formal description, we assume two variables, the indentation-stack and the current-indentation.
1. When a SUBLIST marker is encountered, push ? on indentation-stack.
2. When the expression being read is terminated (for example, by a blank line or the end of the input):
2.1. If indentation-stack’s top is ?:
2.2. Pop off every ? on top of indentation-stack and emit DEDENT for each popped item.
3. At the start of each line, read the indentation ((TAB | SPACE | !)*) and put it in current-indentation. Then:
3.1. If indentation-stack’s topmost non-? item is “not consistent” with current-indentation, signal a bad indent error (BADDENT).
3.2. If indentation-stack’s topmost non-? item is less than current-indentation, push current-indentation on indentation-stack and emit INDENT.
3.3. If indentation-stack’s topmost non-? item is equal to current-indentation: (note: this is a copy of 2.1 and 2.2 above) if indentation-stack’s top is ?, pop off every ? on top of indentation-stack and emit DEDENT for each popped item.
3.4. If indentation-stack’s topmost non-? item is greater than current-indentation:
3.4.1. Pop items off indentation-stack until indentation-stack’s topmost non-? item is less than or equal to current-indentation; emit a DEDENT for each popped item.
3.4.2. If indentation-stack’s topmost non-? item is now equal to current-indentation, pop off all ? on top and emit a DEDENT for each.
3.4.3. Otherwise, if indentation-stack’s top is ?, pop it off and push current-indentation on the stack.
This extension of SUBLIST turns out to be backward-compatible with the current SUBLIST semantics, in the sense that any SUBLIST-using text constructed using the current SUBLIST semantics would have exactly the same meaning in Beni’s extended SUBLIST semantics. This is a significant advantage as it means we can apply this extended rule at any future time without fear of breaking existing code.
Alan Manuel K. Gloria was excited by this proposal, and considered it superior to his original SUBLIST formulation, but David A. Wheeler was much more reserved. Among the concerns noted about this formulation: the let example above remains (as of the time of this writing) the only significant use case for Beni’s extended SUBLIST formulation, and there are already other relatively-painless ways to handle this construct.
David A. Wheeler mentioned the possibility of using a PARTIAL_DEDENT token so that full Beni formulation of SUBLIST could be handled completely in the parser. This possibility has not been explored fully as yet. It may be explored if further use cases for the full Beni formulation are found in the future.
Alan Manuel K. Gloria continues to hold out hope that this extended formulation will get more use-cases, but decided not to press for immediate inclusion in this SRFI.
Beni Cherniavsky-Paskin himself noted that this proposal is “a backward-compatible extension to SUBLIST (similarly applicable to any competing FOOLIST semantics), so we could leave it undecided for now, and legalize it later...”. For the moment, that is what we have done; we have ensured that it could be added later if it turns out to be important to do so.
On 2013-02-23, David A. Wheeler counterproposed (for purposes of experimentation) a subset of Beni Cherniavsky-Paskin’s proposal. He christened the approach “Beni-Lite”, and included a sample implementation using ANTLR and its BNF. This was eventually rejected, but we believe it’s important to document this approach - in part because it could be added later if desired.
In this alternative, a “$” can be closed by an unmatched partial dedent, but only if the “$” is at the end of a line and there is other text besides any indentation characters. The primary argument given for this variant is that it covers the primary use cases David A. Wheeler had seen, and it is possible to formulate this limited variant while continuing to use ANTLR’s grammar checking. It also retains stronger run-time input checking; partial dedents are only legal when including “$” at the end of the line, making them unlikely to use accidentally. It is still complicated, but it is not much more complicated than notations without unmatched dedents.
Here are some sample test cases to demonstrate its impact:
Original Input | s-expression |
---|---|
let $
! ! var1 value1
! body... |
(let ((var1 value1)) body...) |
let $
! ! var1 value1
! ! var2 value2
! body... |
(let ((var1 value1) (var2 value2)) body...) |
let $
! ! var1 value1
! ! var2 value2
! ! var3 value3
! body1 param1
! body2 param2 |
(let ((var1 value1) (var2 value2) (var3 value3)) (body1 param1) (body2 param2)) |
The sample implementation tweaked the indent processor so that if a dedent doesn’t match the parent indent, it generates DEDENT followed by a RE_INDENT. Here is an example of how the modified indent processor could tokenize its input:

Original Input | Tokenized version |
---|---|
let $
! ! var1 value1
! body... |
let SUBLIST EOL INDENT var1 value1 EOL DEDENT RE_INDENT body... |
The BNF was then changed so that SUBLIST allowed more constructs:
it_expr returns [Object v] :
    head
    ...
    | SUBLIST hspace* /* head SUBLIST ... case */
      (sub_i=it_expr {(append $head (list $sub_i))}
       | comment_eol indent sub_b=body
         ( re_indent partial_out=body
             {(append (append $head (list $sub_b)) $partial_out)}
         | empty {(append $head (list $sub_b))} ) )
    ...
  | SUBLIST hspace* /* "$" first on line */
    (is_i=it_expr {(list $is_i)}
     | comment_eol indent sub_body=body {(list $sub_body)} )
However, Alan Manuel Gloria reviewed it and stated that, “I think that, conceptually, having a limitation is an additional complication when teaching the notation... Granted we could just mandate these patterns, but I worry that we are now slipping into the ‘notation is tied to underlying semantic’ bug. Or in this case, ‘notation is tied to underlying legacy syntax’. I’d rather have the full Beni formulation of SUBLIST or the classic 0.4 formulation, in that preference order. I’ll admit that I don’t have a use for the full Beni formulation other than for let, though. I suspect there may be further use cases; but I haven’t found any others yet.”
The current notation does not support either approach at this time. However, the BNF specifically requires that these constructs be detected and forbidden; that way, if future versions add these capabilities, it will be known that they cannot have any other meaning in existing sweet-expressions.
At least two programs have been written using sweet-expressions.
The SRFI authors believe that the existence of these programs - written by two different people for different application areas - shows that sweet-expressions are mature enough to be standardized.
In addition, the older paper Sweet-expressions: Version 0.2 (draft) created sweet-expressions versions of a variety of expressions in a variety of Lisp-based languages, to (1) ensure that the sweet-expression notation is general (not tied to some specific semantic), and (2) show that it is relatively easy to notate common constructs in sweet-expressions. Sweet-expressions were developed for expressions in Scheme, Common Lisp, Arc, ACL2, PVS, s-expression BitC, AutoCAD Lisp (AutoLisp), Emacs Lisp, SUO-KIF, Scheme Shell (Scsh), GCC Register Transfer Language (RTL), MiddleEndLispTranslator (MELT), Satisfiability Modulo Theories Library (SMT-LIB), NewLisp, Clojure, and ISLisp. (Clojure currently uses {...} for a different construct, but sweet-expressions could still be used for Clojure.) This demonstration provides evidence that the sweet-expression notation is sufficiently general and expressive.
The sweet-expression notation itself has been implemented at least twice; one in ANTLR (an LL(*) parser generator) and one in Scheme (as a recursive descent parser). Since it has been implemented two different ways, it is less likely to be extremely difficult to implement. The ANTLR grammar itself has been checked by ANTLR’s grammar checker for ambiguities and other problems. Also, ANTLR confirms that the given BNF grammar is LL(1). These implementations, and the ANTLR checking, suggest that this notation is not too difficult to implement and eliminates the risks of certain kinds of grammar flaws. These implementations have been peer reviewed. In addition, they have passed various test suites; the Scheme implementation in particular has passed a test suite with hundreds of test cases.
The Readable Lisp S-expressions Project developed these notations and implementations of them. In particular, the project distributes the programs unsweeten (which takes sweet-expressions and transforms them into s-expressions) and sweeten (which takes s-expressions and transforms them into sweet-expressions), as well as other related tools.
Here are some style guidelines that may help you create easy-to-read sweet-expressions, based on the Readable project style guide.
Mentally, this is pretty straightforward - on each line, write an expression; everything after the first term on the line, or all child lines, are parameters of the first term. You can use grouping operators ( ), [ ], and { } to put subexpressions on the same line, if you want. Use -( ... ) to negate something.
Whenever you have an infix expression, just surround it with {...}. You can use the form f(...) to call a function; if it has zero parameters, express it as f(), and if it has more than one parameter, separate the parameters with spaces. The f(...) form is especially handy for creating short expressions as a parameter on a line; for long expressions, use indentation instead.
If the function is typically written as infix (including “+”, “*”, “or”, and “<”), use {...} to write it as an infix expression. Generally these operators will be “and”, “or”, or an operator that only uses punctuation. If you’re calling a function with only one parameter, and that parameter is calculated with an infix operation, use the f{...} shorthand.
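For example (illustrations of ours):

factorial{n - 1}   ; f{...} shorthand, same as factorial({n - 1}), i.e. (factorial (- n 1))
{a and b}          ; infix, maps to (and a b)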
However, you may want to keep using prefix form if indentation still matters and one or more of the parameters is exceedingly complex (e.g., it’s nested very deeply or includes program structuring forms like “cond” and “define”). This situation can often occur with “and” and “or” if you’re using a functional programming style.
In general, use indentation to make it easy to see the larger-scale structure of a program or data. Typically major structural atoms should start a new line, including defining a new term (e.g., “define” and “let”), conditionals (e.g., “if” and “cond”), and loops (e.g., “loop”).
When calling a function, if the parameters will fit easily on a line if you use function notation like f(x y(z)), then put them all on a line. When you’re calling a function with no parameters, use function-calling format with “()” at the end, e.g., “f()”. In general, indentation is used for the major “structural” elements of a program, and function calls get used once you’re “near the leaf” of structure (where you won’t go beyond the end of the line).
If you are providing a list of data (and not performing a function/method call), then use the traditional list notation such as “(a b c)”. This is exactly equivalent to “a(b c)”, but expressing it as a list will give the human reader a hint that this data is not considered a potential program. If it’s used as both data and as program, then consider it a program, and use function call notation.
Where it’s understandable, don’t include unnecessary parentheses. In particular, when indentation processing is active, the name of the function is right after the indent, and there are no child lines, simply state the function followed by space-separated parameters.
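Putting several of these guidelines together (an illustration of ours; sq is a hypothetical squaring helper):

define distance(x1 y1 x2 y2)
  sqrt{sq{x2 - x1} + sq{y2 - y1}}

which maps to (define (distance x1 y1 x2 y2) (sqrt (+ (sq (- x2 x1)) (sq (- y2 y1))))).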
You should probably stick to an 80-character width for program text.
Use a consistent amount of indenting for each level. We tend to use 2 spaces for indentation; indentation nesting is more common in sweet-expressions, so 8-character indentations are often too much.
Consider using “!” followed by space if you’re using a medium that hides indentation, or want to highlight a particular vertical group. However, beware if you start a paired expression and let it continue to the next line; the “!” is not an indent character inside parentheses, braces, or brackets.
The reference implementation is portable, with the exception that Scheme provides no standard mechanism to override the built-in reader. An implementation that complies with this SRFI must at least activate this behavior when it reads the #!sweet marker followed by whitespace.
The reference implementation is SRFI type 2: “A mostly-portable solution that uses some kind of hooks provided in some Scheme interpreter/compiler. In this case, a detailed specification of the hooks must be included so that the SRFI is self-contained.”
See the Scheme source code for the reference implementation.
The readable project website has more information: http://readable.sourceforge.net
We thank all the participants on the “readable-discuss” and “SRFI-105” mailing lists, including John Cowan, Shiro Kawai, Per Bothner, Mark H. Weaver, Beni Cherniavsky-Paskin, Arne Babenhauserheide, Ben Booth, and many others whose names should be here but aren’t.
Copyright (C) 2012-2013 David A. Wheeler and Alan Manuel K. Gloria. All Rights Reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.