Front-ends are present in all applications that process source code: compilers, interpreters, linters, and the like. Common beliefs about front-end design are that front-ends are made up of phases (lexical, syntactic, and semantic analysis), that these phases are largely decoupled, and that lexical analysis consists of regular-expression matching. When these assumptions hold, designing a front-end is straightforward. Unfortunately, they aren't always true in the real world. In this article, I discuss the design of a front-end for ANSI C, and show the pitfalls that make it trickier than you'd expect.

Myth #1: Front-Ends Are Made Up of Independent Phases

A front-end is a program that accepts text input and produces a representation of that text in a desired form (an abstract syntax tree, for instance). Again, front-ends are present in any application that manipulates source code in a nontrivial way: compilers, interpreters, pretty printers, code analyzers (such as splint or Purify), and so on.

Anyone who has taken a compiler course recognizes the classic flow diagram of a front-end in Figure 1. It includes three phases: lexical, syntactic, and semantic analysis. The lexer and parser recognize the word-level and phrase-level structure of the input text. The lexer splits the undifferentiated stream of characters of the input program (after preprocessing, if any) into a sequence of tokens, each belonging to a well-defined category such as keyword, operator, literal constant, identifier, or type name. For example, the lexer determines that the consecutive characters "+=" constitute a single operator, rather than two separate "+" and "=" operators, and that "18.52" is a floating-point literal constant rather than the two integer literal constants "18" and "52" with a dot punctuator in the middle. The parser recognizes the language constructs (expressions, statements, declarations, and so on), and usually outputs the syntax tree corresponding to the program. Traditionally, lexers and parsers are obtained automatically by running tools such as flex and bison over a set of specifications. Lexers are specified as a set of actions, each associated with a regular expression; parsers are specified as a set of actions, each associated with a BNF syntax rule. Finally, the semantic analyzer contains all the remaining application-dependent algorithms, which implement context checks (for example, making sure that there are no nested function definitions in a C program) and connect to the back-end.
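The "longest match wins" rule that makes "+=" a single token can be sketched in a few lines of C. This is a hypothetical hand-written fragment, not flex output; the operator table and the function name operator_length are invented for illustration:

```c
#include <string.h>

/* Two-character operators recognized by this toy scanner step. */
static const char *two_char_ops[] = { "+=", "-=", "*=", "/=", "==", "!=", 0 };

/* Return the length of the operator token starting at p: 2, 1, or 0.
   The two-character table is tried first, so "+=" is one token rather
   than "+" followed by "=" (maximal munch). */
int operator_length(const char *p)
{
    for (int i = 0; two_char_ops[i] != 0; i++)
        if (strncmp(p, two_char_ops[i], 2) == 0)
            return 2;               /* longest match wins */
    return (*p != '\0' && strchr("+-*/=!", *p) != 0) ? 1 : 0;
}
```

A real flex-generated scanner applies the same principle across all its regular expressions at once, rather than checking a fixed table.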

Unfortunately, Figure 1 can be misleading in suggesting that the three phases (lexer, parser, semantics) are independent and operate sequentially without overlaps. In that picture, the input would first be entirely tokenized, then its syntax tree derived, and finally all the semantics evaluated on top of this tree. Consequently, you'd think you could design the lexer, parser, and semantic analyzer as separate, distinct problems. While this might be true for simple languages and applications (such as command-line calculators), it isn't true for complex languages like C. For C, it isn't even possible to complete the lexical analysis of a token before some semantic analysis has been completed for the preceding input. For instance:

typedef unsigned long int ulint_t;

contains the declaration of a user type ulint_t. Parsing this declaration according to the grammar of C leads to the parse tree in Figure 2(a), where ulint_t must be tokenized as an IDENTIFIER. (I write tokens such as "IDENTIFIER" in uppercase to differentiate them from nonterminal symbols in the grammar, such as "expression," "statement," and "type_name.") As a consequence of the type declaration, when this declaration is later encountered:

ulint_t my_ul_int;

ulint_t must be tokenized not as an IDENTIFIER (as before), but as a TYPE_NAME. The syntax tree of this declaration is in Figure 2(b).


C lets you define new type names, and the standard C grammar relies on the distinction between identifiers and type names. Therefore, lexers must be able to actively enforce this distinction. Simple workarounds do not help. For example, a lazy solution might abolish the distinction between IDENTIFIER and TYPE_NAME, treat every name as an IDENTIFIER, and adapt the grammar accordingly. This solution fails, however, because it makes the grammar ambiguous, so that some declarations or statements may be parsed in multiple, inconsistent ways.
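A compilable toy shows why the token sequence alone is not enough: in the fragment below, the characters "T * ..." are a pointer declaration in one scope and a multiplication in another, depending only on what T currently names. (The function shadowing_demo is invented for this example.)

```c
typedef int T;              /* T is a type name at file scope */

int shadowing_demo(void)
{
    T *p = 0;               /* "T * p": a declaration; p is pointer-to-int */
    (void)p;
    {
        int T = 3, y = 4;   /* this T shadows the typedef */
        return T * y;       /* "T * y": a multiplication, yielding 12 */
    }
}
```

Without the IDENTIFIER/TYPE_NAME distinction, a parser cannot tell which of the two readings of "T * y;" is intended.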

Two points are clear:

Lexers based on regular-expression matching alone cannot do the job; more intelligence is needed. The lexer needs to know the entire set of declarations and definitions that are visible in the current scope, so it must access the symbol tables associated with the previous input in a way that is consistent with the scoping rules of the language. Unlike in simple languages, where the symbol table is entirely the semantic phase's business, here the lexer operates on the basis of the information in the symbol table.

Symbol tables cannot be maintained by the lexer alone, because understanding the structure of declarations and definitions is the parser's job, and processing them is the responsibility of the semantic layer. Therefore, lexer, parser, and semantic actions cannot operate sequentially; they must work in an intertwined way.

This leads to the structure in Figure 3: a call graph representing the simplified structure of a front-end developed with flex and bison. The application-specific code calls the parser (traditionally, function yyparse()) just once to parse the entire input. The parser calls the lexer (function yylex()) once for each token in the input text. Every time the parser recognizes a syntax rule, an appropriate semantic action is triggered. If the parser recognizes a declaration, the corresponding action adds a new entry to the symbol table. The new entry is visible to the lexer the next time it is invoked to fetch the following token, and it may influence whether that token is categorized as IDENTIFIER or TYPE_NAME.
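The feedback loop can be sketched as a pair of C functions sharing one table. This is a minimal sketch under strong simplifying assumptions: a single flat scope, a fixed-size table, and invented names (add_typedef_name, classify); a real front-end keys entries by nested scope and handles shadowing.

```c
#include <string.h>

enum token { IDENTIFIER, TYPE_NAME };

/* Hypothetical single-scope symbol table shared by the parser's
   semantic action and the lexer's classification step. */
static const char *typedefs[64];
static int n_typedefs;

/* Semantic action: called when the parser reduces a typedef
   declaration. The entry is visible to the very next yylex() call. */
void add_typedef_name(const char *name)
{
    if (n_typedefs < 64)
        typedefs[n_typedefs++] = name;
}

/* Lexer side: consult the symbol table before deciding which token
   category to return for a name. */
enum token classify(const char *name)
{
    for (int i = 0; i < n_typedefs; i++)
        if (strcmp(name, typedefs[i]) == 0)
            return TYPE_NAME;
    return IDENTIFIER;
}
```

After the parser's action calls add_typedef_name("ulint_t"), classify("ulint_t") flips from IDENTIFIER to TYPE_NAME, which is exactly the feedback arrow in Figure 3.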