
Compiler Design

William A. Barrett San Jose State University CmpE 152

FALL 2005 VERSION


Copyright 2000, William A. Barrett. All rights reserved.


Table of Contents
Chapter 1: Writing Compilers and Interpreters .................. 3
Chapter 2: Regular Expressions and Finite State Machines ....... 25
Chapter 3: Lexgen: A Lexical Analyzer Generator ................ 57
Chapter 4: Context-free Grammars ............................... 94
Chapter 5: Expression Semantics ................................ 117
Chapter 6: Symbol Tables ....................................... 126
Chapter 7: Top Down Parsing .................................... 144
Chapter 8: Parsing with Syntax Diagrams ........................ 157
Chapter 9: LR Bottom-up Parsing ................................ 198
Chapter 10: LR Parser Semantics ................................ 235
Chapter 11: AST-based Code Optimization ........................ 265
Chapter 12: Type Declarations and Type Checking ................ 312
Chapter 13: Functions and Procedures in Pascal ................. 362
Chapter 14: Control Structures ................................. 380
Chapter 15: Block Optimization ................................. 413
Appendix 1: A C++ Primer ....................................... 414
Appendix 2: The Intel 80x86 Microprocessor ..................... 459
Appendix 3: The Intel FPU ...................................... 500
Appendix 4: A Pascal Grammar ................................... 511
Appendix 5: Unix Tools ......................................... 533
Appendix 6: Microsoft Tools .................................... 555
Appendix 7: Syngraph, A Recursive Descent Parser Generator ..... 566


Chapter 1: Writing Compilers and Interpreters


William A. Barrett, San Jose State University nch1.doc Copyright 2000 William A. Barrett. All rights reserved. Reproduction or translation of any part of this work beyond that permitted by section 107 or 108 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the author.

What's a Compiler?
Most programmers write software in a high-level language, such as Pascal, C, C++, Java, or Cobol. Many are not aware that they are really making use of a sophisticated program called a compiler that bridges the gap between their chosen language and a computer architecture. Some have little or no grasp of the computer's instruction set, memory organization, and the other details that make their software work. A compiler therefore provides a valuable form of information hiding: most programmers don't need to know anything about the details of translation and execution, only the properties claimed to be supported by the high-level language and its libraries.

A professional computer engineer, however, cannot afford to be ignorant of the details. Compilers are for the most part valuable and reliable tools, but no software is perfect. If a bug arises in code, it's sometimes useful to be able to trace it down to the assembler or machine-instruction level. High performance may require certain low-level operations. Finally, knowing something about the strategy used by a compiler helps the engineer understand the high-level language at a deeper level.

We use the word translation in roughly the same sense in which a person learns to translate from English to (say) German. Knowing the German equivalent of lots of English words is an important first step, and may be the most difficult one for a person to take. However, there's more. Good German requires paying attention to gender in all nouns: a chair, a pencil, or an automobile is masculine, feminine, or neuter, and the gender determines just how adjectives and verbs may be applied to the noun. Also, the word ordering in a German sentence is different from English. Germans love to place the verb at the end of the sentence, whereas in English it sits between the subject and the predicate. Throw father the stairs his coat down might be the way a German would express the sentence Throw father's coat [to him] down the stairs. The alphabets are slightly different, though both English and German conform to the Latin alphabet to a considerable degree. Spoken German is yet another problem, since there are certain voice inflections that are unfamiliar and uncomfortable for the English speaker; the words also take on a pronunciation unlike that of English: the letter w becomes a v sound, for example, so that wolf is spoken as vuulf.

All in all, it's obvious to anyone who's studied a foreign language that you can't become fluent (or even well understood) in it by just changing all your English words into their foreign equivalents. This is also why machine translations of natural language are at worst just wrong and at best seem stilted and unnatural.


A similar problem occurs in understanding the function of a compiler. Just changing names and operators into some kind of machine-language equivalents isn't sufficient. Some of the features in the high-level language, such as types, don't even show up in the translation per se; they just influence the way in which other features are translated. We must also face up to understanding the organization of the target machine and its instruction set, and know everything about the conventions of the symbolic assembler that the compiler intends to generate. In fact, we can argue that in order to design and implement a useful compiler, one must be reasonably expert in all of these disciplines:
- the source language (what language we're trying to translate, e.g. Pascal),
- the host language (what programming language we're using to write the translator, e.g. C++),
- the host metalanguages (special little languages that describe tokens or language structure),
- the target language (what we're translating into, e.g. Microsoft symbolic assembler),
- the target machine (what CPU will finally execute our program), and
- compiler tools, the algorithms and data structures that are commonly used to design a compiler.
If we have a poor understanding of any one of these, our effort at writing a compiler is doomed.

Why High-level Languages?


Let's review some of the more important reasons for using a high-level language with a compiler, compared to directly coding assembler for a processor. Note first that an assembler such as MASM is itself a fairly sophisticated translator. Although in general an assembler converts one line of source code into one instruction, there are a large number of specialized operations required of a modern assembler. However, the most important case that can be made for a high-level language is that it bridges the gap between machine instructions and humans.

Regarding machine instructions and assembler:
- Instructions are primitive: very simple operations, usually one line per instruction.
- Coding requires close attention to status bits and the machine environment.
- Coding requires working with a finite set of registers and memory rather than mathematical entities.
- Addressing is very complicated and hard to get correct.
- Hundreds of instructions are needed to carry out a conceptually simple operation.
- Hand-written assembly code locks you into a particular machine and machine version; it is very hard to move the assembler to a newer or more powerful machine.
- ...but assembler is essential for certain low-level operations and/or the highest performance.

A high-level language and its compiler:
- Provide much higher productivity in software development.
- Approach the performance of assembler for most operations.
- Provide an easy way to express complex operations.
- Provide modular decomposition of very complex problems into simpler components.
- Provide ease of checking correctness, by structured development and inspection.
- Provide warnings of syntax and typing errors.
- Guarantee correctness of the underlying assembler, if the source code is correct.
- Provide portability: all or most of a program can be recompiled on different computers.
- Can provide inline assembly for critical regions (but assembler defeats portability!).

Everything Goes Through a Compiler


Almost every piece of software used on any computer has been produced by a compiler. This astonishing claim arises from the extreme difficulties faced by any human in writing absolute binary instructions for any computer. The "compiler" used may be rather primitive, perhaps just a simple symbolic assembler that maps symbols directly to instructions. But the use of symbols to specify computer operations is nearly universal. Just consider:
- Nearly every component of a typical operating system--its kernel, libraries, support utilities, etc.--is written in a high-level symbolic form and translated through a compiler to machine form.
- Database management systems are written in high-level language form. The popular SQL database access language is an interpreted language: some machine translator is needed to make sense of its directions and turn them into useful actions.
- HTML is an interpreted language--the "L" stands for "language": HyperText Markup Language. It's a powerful way to display text, images, forms, menus, and more, and it's fairly easy to write directly with a text editor.
- The popular HTML browsers, for example those written by Netscape and Microsoft, were written in a high-level language, most likely C++ or Java. By using a high-level language, it was possible to almost automatically port the browser to a large number of different machine platforms with little additional human effort.
- Compilers themselves are written in a high-level language, sometimes in the source language of the compiler itself! A compiler for C was organized in a modular fashion years ago by Steve Johnson to make it easier to generate code for a variety of different machines. The popularity of C as a language largely stems from this effort. Once a reliable C compiler is ready for a platform, the Unix operating system and a large number of libraries can be generated for the new platform, using that compiler as a basis.
- New microprocessor designs require one or more compilers for the machine. These compilers usually start with a cross-compiler, which runs on one established platform but generates machine code for the new system. The cross-compiler is written in a high-level language using an established compiler. Again, C is popular for this kind of effort.
- The Gnu C/C++ compiler was designed to service almost any hardware platform. In general, it relies on a C compiler and a few libraries already provided by the machine's manufacturer, and uses these to construct a Gnu compiler. Each manufacturer's C compiler and library is different in subtle ways; the Gnu team therefore wrote an extensive set of little tests to see just how a particular platform differs from a standard one, then used that configuration information to develop a specialized (and optimized) compiler for the platform. With that compiler, many other Gnu tools written in its standard C language can be compiled for the platform.


The Linux operating system rests on the ingenuity of the Gnu team in developing a standard platform based on their C compiler. A version of Linux is now available on a wide variety of platforms, including Pentiums, Sparc, and most RISC machines. About the only requirement is that the platform processor support 32-bit addressing and some key operating system features, such as multitasking, hardware protection, and user vs. system modes.

Productivity
Software development is an economic activity (someone has to pay for it and someone expects to get paid for their skilled time). Productivity with a suitable high-level language and its compiler tools is easily 50 to 500 times the productivity in assembler. The software development cost of a typical modern product may exceed the hardware development cost by a large factor, and can be very expensive for even a modest technical product. There's also a strong demand to get the software written and tested within an aggressively short deadline, because otherwise the competition may release a similar product first and capture the bulk of the market. Many large software products are expected to operate on different platforms, including some that haven't been developed yet. Having the code portable through being written in a high-level language is a major factor in reducing the cost of porting software to new platforms. All these considerations clearly call for the most productive approach to software development possible, with the software written in high-level languages that provide the maximum in safety (freedom from bugs) and portability, without compromising performance.

Why Study Compiler Theory?


We're going to explore just how a compiler is designed and organized. We'll see that its key components involve a kind of expert system, using production rules. The central challenge in a compiler is making sense of a long sequence of characters in a source file. This is a challenging problem in computing, and in fact is a form of artificial intelligence: how to map something that's clear and sensible to a human into something that's clear and sensible to a machine.

Many people have dreamed of an intelligent machine such as HAL, the spaceship robot in the movie 2001: A Space Odyssey. HAL could talk to the human crew members, could understand their verbal commands, could provide intelligent criticism of their attempts at artwork, could compose and criticize artwork, and could even read lips! Very few of those human-like capabilities have actually been achieved in a machine, and of those that have, the machine falls far short of humans in performance. For example, you can often respond to a machine over the telephone through a voice message: "Say or press one". That's a form of speech recognition, but the machine can only figure out which of a small number of expected responses you gave. It's useless to ask that machine to connect you to someone in the auditing department by name!

The intelligence involved in a compiler is far more modest than reading lips or listening to speech. We meet the machine halfway by providing it with a file of ASCII characters. We also expect that the machine may not make sense of just anything written in that file; only certain carefully organized sequences of characters will do. Anything else is considered a "syntax error". As humans, it's up to us to figure out why the machine recognition failed, and then repair the error. Computers still do what they are told, not what we want. Yet even that limited form of AI is very useful to us. With a limited amount of study on our part, we can tell a machine what we want it to do.

The value to a computer engineer of understanding some compiler theory lies in these general observations:
- Virtually every transaction humans make with a machine is through some form of translation, whether it be through a written file (compiler) or a graphical user interface (GUI). And a GUI is in fact a disguised form of finite-state machine, which we will see is a valuable compiler tool in its own right.
- By understanding the nature of compilers and their limitations, we become better programmers.
- We acquire a much better understanding of the underlying machine operations.
- We appreciate to what extent a compiler language limits our ability to write really high-performance programs, whether we are aiming at minimal code or maximum speed.
- The effective use of production-rule systems is similar to the effective use of C++ objects for the purpose of designing graphical user interface systems. That's how the best computing systems are designed, and we might as well get used to it!
- Compilers depend on a very well-developed and mature theory of parsing. Its study will prepare us for advanced studies in language theory and automata.

Compiler Organization
A compiler or translator is organized as a series of filters that read a source file and yield an object file or symbolic assembler file. These are called the preprocessor, the scanner, the parser, and the code generator, respectively. See figure 1. The four filters may be separate tasks, linked together by pipes, or separate phases in a common program.

Usually the preprocessor (if there is one) is a wholly separate program. It accepts a source file and delivers a source file to the scanner.

The scanner phase accepts a source file and delivers a stream of tokens to the parser phase. The scanner is also called a lexical analyzer. A token is a group of one or more characters in sequence. A scanner is intended as an interface between the input stream and a parser. Its purpose is to filter the input character stream and generate a stream of tokens, skipping white space and passing the tokens on to the parser.

The parser phase will generate special data structures representing clauses, or collections of tokens, of the source language. These structures are acted upon by the code generator to yield a target file. The target file may be symbolic assembly code for some CPU, or some other low-level language.

[Fig. 1. A compiler: source file -> preprocessor -> scanner -> parser -> code generator -> target file]
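When the four filters are phases of one program rather than separate piped tasks, the top-level driver can be quite short. The following C++ skeleton is purely illustrative -- the phase classes and their interfaces are invented for this sketch, not taken from the compiler developed in this book:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Skeleton phase classes; the bodies are stubs.
struct Token { std::string text; };

struct Scanner {                              // characters -> tokens
    explicit Scanner(std::istream &src) : src(src) {}
    std::vector<Token> run() { /* FSM-driven tokenizing here */ return {}; }
    std::istream &src;
};

struct Parser {                               // tokens -> clause structures
    std::string run(const std::vector<Token> &) { /* parsing here */ return "ast"; }
};

struct CodeGen {                              // structures -> target file
    void emit(const std::string &, const char *target) {
        std::ofstream(target) << "; generated assembler\n";
    }
};

int main(int argc, char **argv)
{
    if (argc < 2) return 1;                   // expects a (preprocessed) source file
    std::ifstream source(argv[1]);
    Scanner scanner(source);
    Parser parser;
    CodeGen codegen;
    codegen.emit(parser.run(scanner.run()), "out.asm");
    return 0;
}

Each phase consumes the previous phase's output, exactly as the pipe arrangement in figure 1 suggests.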

Organization of Text Files


At some low level, a source text file must be read by the scanner on a character-by-character basis. Text files are commonly prepared with a technical editor, and consist of ASCII printable characters (space, ASCII 0x20, through tilde, '~', ASCII 0x7E), tabs, and line endings only. A line consists of a sequence of printable characters terminated by a line ending. A line ending has different forms, depending on the operating system. In most Unix systems, a text file line ending is expressed by a single line-feed character (ASCII 0xA, or \n in C). Under MSDOS, Microsoft Windows and OS/2, a text file line ending is expressed by a carriage-return line-feed pair. (The carriage-return character is ASCII 0xD, or \r in C.)

Reading a line from a text file is facilitated by several C/C++ library functions. We will use the C function fgets to fetch one line. This function expects a pointer to a string buffer, a maximum length, and a pointer to a FILE object. The text file should be opened through the C function fopen. This function expects a pointer to a filename, and a mode option. If you use option "r", the text file will be opened for read-only access, and each line fetched by fgets will be terminated by one line-feed character, followed by a null character (ASCII 0). If the line is longer than the specified maximum length, the line-feed character will be absent, but the null character will always be there; the remainder of the line will be fetched on the next fgets call. (See any C reference manual or use the Unix man pages for details about these functions.)

A nice property of fgets and fopen is that they work exactly the same way in Unix and MSDOS; they are supported in this standard way by most Windows/MSDOS compilers and most Unix compilers. Note that there are many other ASCII characters that might be used in a text file. Some technical editors allow any ASCII character to be inserted in a text file, but this practice will cause complaints in our lexical analyzer. A program will clearly require some way of describing ASCII characters other than the printables, i.e. those with a character code greater than 0x7E or less than 0x20. But these can only find their way into some internal char array, and we can express any ASCII character through a C/C++ backslash convention, for example:
'\xFF'

which describes the ASCII character code 0xFF. The program source code should not contain any ASCII characters other than \n, \t and space through tilde ('~', ASCII 0x7E). Only after the program is compiled, linked and executed may other ASCII characters appear in certain char variables or arrays. Providing for these special character conventions in C/C++ is a task for the lexical analyzer. Its FSM needs to be provided with appropriate regular expressions that describe such forms as '\xFF'.
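As a concrete illustration of the fgets/fopen discipline described above, here is a minimal sketch that copies a text file to the screen line by line; the filename simple.pas is just an example:

#include <cstdio>

int main()
{
    char line[256];
    std::FILE *fp = std::fopen("simple.pas", "r");   // "r" = read-only access
    if (fp == NULL) {
        std::perror("simple.pas");
        return 1;
    }
    // fgets stops after a line feed (kept in the buffer) or after
    // sizeof(line)-1 characters; the buffer is always null-terminated.
    while (std::fgets(line, sizeof line, fp) != NULL)
        std::fputs(line, stdout);
    std::fclose(fp);
    return 0;
}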

Preprocessors
A preprocessor is a translator program that accepts some language's source file and generates a modified source file intended for further processing by a compiler. A familiar preprocessor is the C/C++ preprocessor, program cpp in Unix, which is fully defined in the ANSI C specification. cpp provides several services:
- Macros can be defined and used later in the source program, through #define.
- Sections of code can be included or skipped through conditional macro commands, using #if and #ifdef. These must be terminated by a matching #endif, and can be nested.
- Errors can be raised in the preprocessor in case certain macro conditions are not met.
- Lines ending in a backslash are concatenated with the next line.
- Quoted strings may be concatenated by the preprocessor into single strings.
- Comments may be removed.
- External files may be included, through #include.

The resulting output file will of course be stripped of various source material. It may be expanded by including external files or expanding macros, or collapsed through conditionals. Since most compilers need to refer to the original source lines in order to generate meaningful error messages, the preprocessor must attach line-number information to the generated file in a form that the compiler will accept. Otherwise, lines in the expanded file will not correspond to lines in the original source file.

A C preprocessor cannot be constructed from a pure FSM. It requires symbol table services (to support the #define macros) and a pushdown stack (to support nested #if ... #endif structures). It's normally written as a standalone program that accepts a source file as stdin, and generates an expanded processed text file as stdout.


A C preprocessor is a fairly easy program to write. It essentially copies most of the characters in the file from the input to the output, except when the first character in a line is "#". When the preprocessor detects that first character, it springs into life. A command word follows that "#". Here are some of the common command actions:
- #define name string. The name is entered into a symbol table and associated with string. Whenever name appears later in the source file (or in another #define string), it's looked up in the preprocessor symbol table and replaced with its associated string.
- #if expression. If the expression evaluates to "true", the following source lines are copied. Otherwise, they are skipped. A variation is #ifdef name, which expects a preprocessor name; here, the copying or skipping is controlled by whether name appears in the preprocessor symbol table.
- #endif. This terminates the range of an #if or #ifdef.
- #include name. The file name, which should be a text file, is copied in place of this line. The file may contain other preprocessor directives, such as #include, causing other files to be copied in.

In addition, the preprocessor finds every identifier in the source file and checks to see if it's a define requiring expansion. The expansion is done without regard to the context of the define name, a fact that sometimes causes great confusion to the programmer. Shell variable names must be noticed by the preprocessor. For example, in a Unix Bourne shell, one can write
myvar=myname

Then a C preprocessor will consider name myvar to be defined as the string myname, just as though it appeared in a define like this:
#define myvar myname

We'll assume from here on that any program source file has been filtered by a preprocessor before the scanner operates on it.
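To make the #define and #if actions concrete, here is a small illustrative fragment (the macro names BUFSIZE and DEBUG are invented for the example), together with what a C preprocessor produces from it:

#define BUFSIZE 256
#define DEBUG   1

#if DEBUG
char buffer[BUFSIZE];   /* kept: DEBUG evaluates to true */
#else
char buffer[16];        /* skipped */
#endif

After preprocessing, only the single line char buffer[256]; survives (together with the line-number information mentioned above): BUFSIZE is replaced by its associated string, and the #if selects the first branch.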

Scanner
The scanner is responsible for reading what amounts to a long stream of bytes, i.e. ASCII characters, and breaking them into units called tokens. Here's a typical line from a Pascal program:
PROCEDURE proc (i : integer; VAR j : integer);

The scanner will break these into the following tokens, and also classify them:
PROCEDURE    a reserved word
proc         an identifier
(            a reserved symbol
i            an identifier
:            a reserved symbol
integer      a reserved word
;            a reserved symbol
VAR          a reserved word
j            an identifier
:            a reserved symbol
integer      a reserved word
)            a reserved symbol
;            a reserved symbol

Part of the scanner's job is to skip over whitespace: usually ASCII space characters, tab characters, end-of-line characters, and comments. Whitespace is used in a high-level language to separate tokens and generally to make the source code more readable.

Sometimes you can write tokens together without using any separating whitespace; in other cases, you can't. For example, if PROCEDURE and proc had no space between them, the scanner would assume that PROCEDUREproc was just one identifier. On the other hand, writing (i:integer with no whitespace is not ambiguous to the scanner, and is certainly more convenient for the programmer.
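As a rough sketch of this classification step (not the FSM-driven scanner developed in chapter 3, and deliberately ignoring the lookup that separates reserved words from identifiers), a hand-written loop might look like this:

#include <cctype>
#include <cstddef>
#include <iostream>
#include <string>

// A toy classifier.  A real scanner would also look identifiers up in a
// reserved-word table to tell PROCEDURE apart from proc.
void scan(const std::string &line)
{
    std::size_t i = 0;
    while (i < line.size()) {
        unsigned char c = line[i];
        if (std::isspace(c)) { ++i; continue; }          // skip whitespace
        std::string token;
        if (std::isalpha(c)) {                           // identifier or reserved word
            while (i < line.size() && std::isalnum((unsigned char)line[i]))
                token += line[i++];
            std::cout << token << "\tidentifier or reserved word\n";
        } else {                                         // reserved symbol: ( : ; etc.
            token += line[i++];
            std::cout << token << "\treserved symbol\n";
        }
    }
}

int main()
{
    scan("PROCEDURE proc (i : integer; VAR j : integer);");
    return 0;
}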

Parser and Code Generator


The parser is probably the most sophisticated and complicated part of an interpreter or compiler. Its task is to make sense of the stream of tokens supplied to it by the scanner. It essentially clumps together sequences of tokens into groups called clauses. The process is somewhat like that used in language classes, in which a sentence is decomposed into a subject and a predicate. These in turn can be broken down to locate a verb, nouns, adjectives, adverbs, and so forth. When we read a sentence from a book, we mentally break it down into these components or clumps, and that clumping process helps us to understand the meaning of the sentence.

A compiler must similarly break down a large program into smaller pieces. For example, every C program consists of a set of large pieces called declarations and functions. Within a declaration clause, one might find subclauses corresponding to a class, struct, array, etc. Within a function clause, one might find subclauses corresponding to the function return type, function name, function parameters, function local variables, and function body.

We'll study two widely used mechanisms of describing a computer language at the parser level: production rules and syntax diagrams. These are abstract ways of viewing the structure of a computer language. Unlike English and other natural languages, computer languages must have the property of no ambiguity. That means that a particular computer program, if free of syntax errors, has exactly one well-defined meaning to the computer. (It's sometimes the case that that meaning isn't what the program's author intended, but it's the one that the computer will execute when the program is compiled and run.) It happens that we can also use the abstract descriptions (syntax diagrams, production rules) to systematically generate a parser, using a special software tool called a parser generator.
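To give a first taste of production rules (chapter 4 develops them formally), here is a sketch of how a few Pascal-like clauses might be described; the rule names are chosen for illustration only:

Statement       ->  Assignment | WhileStatement | CompoundStatement
Assignment      ->  identifier := Expression
WhileStatement  ->  WHILE Expression DO Statement

Each rule says that the clause named on the left may be composed of the sequence of tokens and subclauses on the right; the parser's job is to discover which rules, applied in which order, account for the token stream it is given.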


Program simple.pas
Let's look at a small Pascal program, then examine its translation into 8086 symbolic assembler code. This little program doesn't do much of anything interesting, but it can be compiled and executed. Pascal will be our source language, so here's what a Pascal program looks like:
PROGRAM simple (output);
VAR
  n : integer;
  x : real;

PROCEDURE proc (i : integer; VAR j : integer);

  FUNCTION func (y : real) : real;
  BEGIN {func}
    j := 5;
    writeln('In func, the value of j should be 5, is', j:3);
    func := i + y + 0.5;
  END {func};

BEGIN {proc}
  j := i DIV 2;
  writeln('In proc, the value of j is', j:3);
  x := func(3.14);
  writeln('In proc, the value of j is', j:3);
END {proc};

BEGIN {simple}
  n := 1;
  writeln('In simple, the value of n should be 1, is', n:3);
  proc(7, n);
  writeln('In simple, the value of n is', n:3, ' and the value of x is', x:8:4);
END {simple}.

Well, perhaps this isn't so simple, after all. But let's trace through it to see how Pascal organizes things (it's similar to C, but has some differences), and also to appreciate the operations provided by Pascal. The program starts at the line
BEGIN {simple}

found near the end. A block of code is enclosed between begin . . . end just as C encloses code in { . . . }. Also, { . . . } in Pascal encloses a comment. So the {simple} part here is just a comment. The line
n := 1;

sets variable n to 1. The operator := is assignment. What is n, you ask? Very good question. Pascal, like C++, is strongly typed. So there has to be a declaration of the variable n somewhere. It's near the top, just past the VAR keyword:
n : integer;

So this stamps n as a variable with the type integer. It happens that an integer in this compiler is carried as a 32-bit two's-complement number; that isn't demanded by the Pascal language, it just happens to be the way we've put this compiler together. The next line (after the n := 1) is
writeln('In simple, the value of n should be 1, is', n:3);

This sends some text to the screen when the program is run. The "write" suggests that. The "ln" part of writeln causes a line feed to be sent at the end of this sequence of stuff. Literal strings in Pascal are enclosed in single quote marks, so the first parameter
'In simple, the value of n should be 1, is'

is just sent literally to the screen. Following that is variable n, and the :3 means it should be printed in a field width of 3 characters minimum. Since n was just set to 1, we expect to find the first line printed to be
In simple, the value of n should be 1, is 1

The extra spaces between is and 1 come from the field width. The third line
proc(7, n);

is a procedure call. It works just like a function call in C. Two parameters are passed to this procedure, a literal 7, and our variable n. The procedure is declared in this section of source code:
PROCEDURE proc (i : integer; VAR j : integer);

  FUNCTION func (y : real) : real;
  BEGIN {func}
    j := 5;
    writeln('In func, the value of j should be 5, is', j:3);
    func := i + y + 0.5;
  END {func};

BEGIN {proc}
  j := i DIV 2;
  writeln('In proc, the value of j is', j:3);
  x := func(3.14);
  writeln('In proc, the value of j is', j:3);
END; {proc}

Here's where Pascal looks peculiar compared to C. What is that FUNCTION definition doing between the PROCEDURE header and the procedure body (the BEGIN {proc}...END {proc} section)? It's a nested function. The novel idea here is that this function can only be called from within the proc body, and not from some other place. C doesn't support this, but C++ has a better idea in the notion of a class, with functions bound to their class. However, procedure proc starts with the line
j := i DIV 2;

This does an integer division of variable i by 2. Only the quotient is kept; the remainder is thrown away. Variable i is clearly the first parameter that was passed (the 7), so we expect j to be 3. The next writeln
writeln('In proc, the value of j is', j:3);

should report this fact. Next we find the line


x := func(3.14);

This calls the function func, passing it parameter 3.14. A function is supposed to return a value, and the returned value will become the new value carried by variable x. We leave it to the reader to figure out where x is declared, and what type is returned by func. Function func doesn't do anything very interesting, except for the line
func := i + y + 0.5;

Notice that variable i is declared as an integer type, while y and func are declared real. Reals and integers are fundamentally different kinds of numbers. Also, the 0.5 is clearly a floating-point number, i.e. a Pascal real. And note that while y is a parameter that belongs to function func, i is a parameter that belongs to the parent procedure proc.

This also shows how Pascal functions return a value. The program should assign something to the name of the function. This name can't be used anywhere else within the function (it would be confused with a function call), but it can be assigned to. When the function exits, that value will be sent back to the caller as the return value of the function. Here, we use it to set variable x in the assignment statement
x := func(3.14);

Translation of simple.pas
We'll now look at a good translation of the Pascal program given above, simple.pas. We'll translate it to symbolic 80386 (or Intel Pentium) assembler, which is our target language. The assembler file will also contain the source language statements as comments, so we can see just how the translation is connected to the source statements.

Assembler is very different from a high-level language. It reflects the organization of the electronic processor, and the way in which it is controlled. When writing assembler, we need to be aware that all variables need to be given addresses in random-access memory (RAM). Also, most operations (adding, subtracting, multiplying, etc.) require that variable values first be fetched from memory into registers. There are a small number of registers provided in the microprocessor, so the management of the registers is an important consideration. Finally, each operation must be spelled out in excruciating detail. Multiplication of two integers is rather different from multiplication of two floating-point numbers. If the sine of an angle is required, there must be a subroutine available to perform that operation; few microprocessors have such functions "built in".

We cannot explain all the details of this translation here. Instead we'll just point out some of the interesting things that are going on. Those who are familiar with the 80386 will be able to understand most of the operations. You'll find the details of assembler and the Microsoft assembler (MASM) operations in appendix 2. And, of course, just how this compiler generated the assembly given the Pascal source statements is the subject of this book. Our comments on the code are given in ordinary prose between the code sections; the generated code is shown as the compiler emitted it. A general discussion of the translation and some conclusions are given after the program code.
; 1: PROGRAM simple (output);

Assembler comments start with semicolon and run to the end of the line
; Pascal program SIMPLE

The INCLUDE pulls in some serve functions from file aservice.asm


include aservice.asm
; 2:
; 3: VAR
; 4:   n : integer;

.DATA announces that the following material goes into a special data section of memory
.DATA

This is how a variable is declared in assembler. The Pascal name "n" becomes an extended name "N_038". An integer is considered a "double word", a 32-bit integer. It requires the "dd" operation, which stands for "declare double". The 0 is an initial value.
N_038   dd 0
; 5:   x : real;
X_039   dd 0.000000

Here, variable X is declared as a double-word (32-bit) float.


; 6:
; 7: PROCEDURE proc (i : integer; VAR j : integer);
; 8:
; 9:   FUNCTION func (y : real) : real;

Internal variables such as i, j, y, and the function return value have a memory address determined at runtime, by way of an offset from register EBP.

I_040    EQU <[ebp+16]>
J_041    EQU <[ebp+12]>
FUNC_043 EQU <[ebp+16]>
Y_044    EQU <[ebp+12]>
; 10:
; 11:   BEGIN {func}

.CODE announces that what follows must be in the code segment of memory, separate from the data segment
.CODE

This is how a procedure opens in assembler


FUNC_045 proc near
    push stlink+12
    push ebp
    mov  ebp,esp
    mov  stlink+12,ebp

The push instructions save some values on a runtime stack

Here is how a simple assignment looks in assembler. It's complicated by the fact that variable j is not local to this function; it belongs to an enclosing function. We need an "uplevel reference", discussed in chapter 13.
; 12:     j := 5;
    mov  eax,5
    mov  ebp,stlink+8
    mov  ebx,[ebp+12]
    mov  ebp,stlink+12
    mov  [ebx],eax

Here's how an output statement is coded. Each of the separate fields in the writeln results in a separate IO function call.
; 13:     writeln('In func, the value of j should be 5, is', j:3);
    push dword ptr 1

This gets the string 'In func, ...' printed. The string needs to reside somewhere in memory; we put it in the data segment like this, and it's labeled L1.

.DATA
L1  db 39
    db "In func, the value of j should be 5, is",0
.CODE

The lea instruction gets the string address in register EBX so that pushString can deal with it.

    lea  ebx,L1
    call pushString
    push dword ptr 0

writesA is a macro that turns into a function call. It writes the string to stdout.

    writesA
    add  esp,8

This starts the writing of the variable j


    push dword ptr 1
    mov  ebp,stlink+8
    mov  ebx,[ebp+12]
    mov  eax,[ebx]
    mov  ebp,stlink+12
    push eax
    mov  eax,3
    push eax
    writeiA
    add  esp,12

This writes a line feed after the string and number are written.

    push dword ptr 1
    writelnA
    add  esp,4
; 14:     func := i + y + 0.5;
    mov  ebp,stlink+8

This fetches variable i, a 32-bit integer


    mov  eax,[ebp+16]
    mov  ebp,stlink+12
    mov  rtmp,eax

This loads variable i into the floating-point unit (FPU), then loads variable y into the FPU, then adds i and y:

    fild rtmp
    fld  dword ptr [ebp+12]
    fadd

This is how 0.5 looks as a floating-point number. The compiler has produced a 32-bit number in hex format.

.DATA
L2  dd 03f000000h
.CODE

This loads the 0.5 into the FPU, and adds it to (i+y):

    fld  L2
    fadd

This saves the result in variable func:

    fstp dword ptr [ebp+16]
; 15:   END {func};

Preparing to return from this function:

    fld  dword ptr [ebp+16]
    mov  esp,ebp
    pop  ebp
    pop  stlink+12

This returns from the function:

    ret  8
FUNC_045 endp

The compiler prints a symbol table at the end of each function--this one only describes one variable: y.

; ----- SYMBOL TABLE -----
; Y      REAL
; 16:

This code is for procedure PROC.


; 17: BEGIN {proc}
PROC_042 proc near

We need to save two things by pushing them onto the runtime stack. They will be popped off just before returning.

    push stlink+8
    push ebp
    mov  ebp,esp
    mov  stlink+8,ebp

Here's an integer division. Integers are 32-bit things. We use the 32-bit registers EAX and ECX for the arithmetic.
; 18:   j := i DIV 2;
    mov  eax,2
    push eax
    mov  eax,[ebp+16]
    pop  ecx

Here's the integer division:

    cdq
    idiv ecx
    mov  ebx,[ebp+12]
    mov  [ebx],eax
; 19:   writeln('In proc, the value of j is', j:3);
    push dword ptr 1

Here's where we write a message to stdout. It's compiled just like the one in func.

.DATA
L3  db 26
    db "In proc, the value of j is",0
.CODE
    lea  ebx,L3
    call pushString
    push dword ptr 0
    writesA
    add  esp,8
    push dword ptr 1
    mov  ebx,[ebp+12]
    mov  eax,[ebx]
    push eax
    mov  eax,3
    push eax
    writeiA
    add  esp,12
    push dword ptr 1
    writelnA
    add  esp,4

Here's how FUNC is called with its parameter


; 20:   x := func(3.14);
    sub  esp,4
    push dword ptr 04048f5c3h
    call FUNC_045

The parameter 3.14 is converted to a 32-bit number and pushed on the runtime stack, then the function is called; the parameter will be popped from the stack before returning. The function returns a floating-point number, which is left in the FPU, so we just need an fstp to save it in variable X.

    fstp X_039
; 21:   writeln('In proc, the value of j is', j:3);
    push dword ptr 1
    lea  ebx,L3

Another message to stdout


    call pushString
    push dword ptr 0
    writesA
    add  esp,8
    push dword ptr 1
    mov  ebx,[ebp+12]
    mov  eax,[ebx]
    push eax
    mov  eax,3
    push eax
    writeiA
    add  esp,12
    push dword ptr 1
    writelnA
    add  esp,4
; 22: END {proc};
    mov  esp,ebp
    pop  ebp
    pop  stlink+8
    ret  8
PROC_042 endp
; ----- SYMBOL TABLE -----
; FUNC   FUNCTION
; I      INTEGER
; J      INTEGER
; 23:

Here is where the main program starts. This will be called from the operating system
; 24: BEGIN {simple}
    public _pasMain
_pasMain proc near
    push stlink+4
    push ebp
    mov  eax,ds
    mov  es,eax

A simple assignment like this is converted into a single instruction


; 25:   n := 1;
    mov  N_038,1

Another message to stdout


; 26:   writeln('In simple, the value of n should be 1, is', n:3);
    push dword ptr 1
.DATA
L4  db 41
    db "In simple, the value of n should be 1, is",0
.CODE
    lea  ebx,L4
    call pushString
    push dword ptr 0
    writesA
    add  esp,8
    push dword ptr 1
    push N_038
    mov  eax,3
    push eax
    writeiA
    add  esp,12


    push dword ptr 1
    writelnA
    add  esp,4

Calling PROC is similar to calling FUNC, except that two parameters are passed by pushing them in the runtime stack.
; 27:   proc(7, n);
    push dword ptr 7
    lea  ebx,N_038
    push ebx
    call PROC_042

Another message to stdout:

; 28:   writeln('In simple, the value of n is', n:3,
; 29:           ' and the value of x is', x:8:4);
    push dword ptr 1
.DATA
L5  db 28
    db "In simple, the value of n is",0
.CODE
    lea  ebx,L5
    call pushString
    push dword ptr 0
    writesA
    add  esp,8
    push dword ptr 1
    push N_038
    mov  eax,3
    push eax
    writeiA
    add  esp,12
    push dword ptr 1
.DATA
L6  db 22
    db " and the value of x is",0
.CODE
    lea  ebx,L6
    call pushString
    push dword ptr 0
    writesA
    add  esp,8
    push dword ptr 1
    push X_039
    mov  eax,8
    push eax
    mov  eax,4
    push eax
    writefA
    add  esp,16
    push dword ptr 1
    writelnA
    add  esp,4

Here's the end of the main program. Register EBP is restored


; 30: END {simple}.
    pop  ebp
    pop  stlink+4

The RET instruction returns to the caller of this function


ret


_pasMain endp

The global symbol table contains a lot of predefined Pascal functions. It's abbreviated here
; ----- SYMBOL TABLE -----
; ABS      PROCEDURE
; ARCTAN   PROCEDURE
; ASSIGN   PROCEDURE
; CHR      PROCEDURE
; (etc.)

Observations
If the reader has followed through our translation commentary carefully, and has at least partially grasped the assembler operations coded therein, he/she will appreciate the following observations.

Verbosity. The number of assembler lines (264) is considerably larger than the number of Pascal lines (30). Of course, some of the Pascal lines are blank, and some of the assembler lines are in the form of a symbol table listing. However, on the average, one can expect one Pascal line to turn into 5 to 10 assembler lines. Having all that source code generated by an automatic program is clearly a blessing, provided, of course, that the assembler is generated correctly.

Underlying Complexity of the Source Language. In Pascal, once we have declared types (integer, real, Boolean, etc.) for various variables, we can use arithmetic and logical operations on them quite blithely, trusting that the compiler will do the "right thing". Yet the "right thing" turns out to be rather complex at the assembler level. If an integer and a real must be operated on, then the integer must first be converted to a real. The compiler cannot afford to ignore the types of the variables; it must pay careful attention to them, remembering many details about each variable.

Complexity of the Target Machine. Compared to the smoothly mathematical nature of modern programming languages, with their algebraic notation and use of objects and functions, almost every CPU and microprocessor has a very complex and "bumpy" organization. Microprocessors are designed around instruction sets, which provide very basic unit operations on memory or registers. These need to be combined with certain internal resources supported by the micro, such as a stack, byte-oriented memory, some primitive function call instructions, and more. Attention must continually be paid to the limitations of the processor, for example:
- only certain registers are available
- only eight floats can be pushed into the FPU stack
- integer division requires the use of particular registers
- only certain addressing modes are provided.
And there's much more. An instruction set is at best designed to support certain language environments very well, and others poorly. The challenge for the compiler designer is to find an optimal sequence of instructions or library functions to carry out the required high-level operations.

Lack of Processor Resources. In Pascal, we can blithely declare arrays, records, and as many variables as we like, with little or no concern as to whether they are a limited resource. At the assembler level, we are forced to work with a limited set of registers, for example, EAX and one or two others. Words and double words in memory must be fetched into registers. We need to consider temporary locations for things, using the runtime stack for that.

Complexity of Function Calls. Pascal and C functions are easily declared and called, provided that we think about the necessary types. Assembler imposes more of a strait-jacket. Attention must be paid to just how the parameters are passed and later accessed. Setup and tear-down code is required to maintain the integrity of the runtime stack. Space for local variables must be allocated from the runtime stack, and later released.

Need for High Performance. If a compiler can't deliver assembler code that is reasonably minimal in size and in choice of operations, then a programmer might be tempted to write in assembler directly. The compiler used for the above compilation is reasonably good at optimizing assembly, but there is clearly room for improvement. For example, look at the way that func returns. The return value is first stuffed into the return value location, then pulled back out and pushed into the FPU stack. Why not just stuff it in the FPU stack in the first place? One can also notice places in which a number is pushed on the runtime stack, then shortly afterward popped. Couldn't it be saved in another register in the meantime? Such little details represent compiler design compromises, and they can be surprisingly difficult to optimize. Yet they can also add up to a significant loss in performance.

Need for Absolute Perfection. A bug in compiled code that arises from a compiler bug is perhaps the worst calamity a program developer can imagine. Developers usually assume that their compiler will generate correct code, and will spend a lot of time hunting for their own bug when it's really the compiler's bug. Also, many programmers are incapable of tracking a bug down to the assembler level; they depend instead on a high-level symbolic debugger.

The Burden of the Compiler Designer.


Designing and implementing an industrial-quality compiler for even a simple programming language is a large task. As we've pointed out above, the designer must understand the source language, assembler, and the target machine very well. He/she should also be skillful in the host language and its environment. In addition, we will describe some powerful tools that facilitate compiler design. These don't behave in an intuitive way, at least not until you acquire some practice with them. Finally, a language designer, someone charged with inventing a feature set and the syntax for a wholly new language, should be a highly skilled professional. He/she should have a deep understanding of the properties of computer languages, and their description in the form of production rules or syntax diagrams.

A Brief Glossary
Here's a list of terms that we'll use in this book. It's important to pay attention to these definitions and refer to them from time to time, as they don't necessarily agree with a dictionary definition. Our need for precision in definition stems from our need to discuss several subtle algorithms involving the translation of a source file to a target file.

Character: any ASCII symbol. In the US, coded as a number between 0 and 255.

Control Character: an ASCII symbol between 0 and 31 (decimal). Includes tab, end-of-line, bell, form feed and more.

Printable Character: an ASCII symbol between space and '~'. Includes digits, letters, space, and these special characters: ~`!@#$%^&*()_-+=|\{}[]:;<>,.?/


Letter: one of the ASCII letters, i.e. a through z and A through Z.

Digit: one of the ASCII number symbols, i.e. 0123456789.

Integer: a sequence of digits representing a numeric value. A short integer (C convention) is in the range -32768 through 32767. A long integer is in the range -2^31 through 2^31 - 1.

Floating-point number: a number with a mantissa and an exponent. Represented internally as a 32-bit float or a 64-bit double. In the IEEE representation, a float has an 8-bit exponent e and a 23-bit mantissa m. One bit is the sign s. The value of such a number is defined as

    (-1)^s * (1 + m*2^-23) * 2^(e-127)

Here, e and m are interpreted as unsigned integers drawn from the exponent and mantissa bits. For example, if the exponent bits are 01110101, then e is 0x75, or 117 decimal. The number is packed in 32 bits of memory as follows: sign (1 bit), exponent (8 bits), mantissa (23 bits).

String: a sequence of characters. Represented internally as a char array. In C, a string is represented internally as a sequence of chars organized by increasing address, in which the final character has the value 0, i.e. \0. A C unsigned character can carry any value in the range 0..255.

Literal: applied to a string or number, refers to a specific value. In C, an example of a literal number is 15.76E-3. A literal string is "Here is my string!!".

Identifier: a name used in a programming language that stands for something. In C, Pascal, C++, Java, and Ada, identifiers may start with a letter, and may contain any number of letters, digits or underscores.

Token: a string drawn from the source file that represents a unit object from the point of view of the parser.

Whitespace: a string drawn from the source file that can be skipped over by the scanner. It means nothing to the parser. Typical whitespace in a programming language consists of spaces, tabs, line endings, and comments.

Scanner: that part of a compiler or translator that collects characters into tokens, also skipping whitespace.

Parser: that part of a compiler that collects tokens into structured clauses, so that the clauses may efficiently be converted into assembler or executed.

Source language: the language accepted as input by a compiler or interpreter, for example, Pascal.

Target language: the language generated by a compiler, usually assembler or absolute machine binary.

Host language: the language in which the compiler or interpreter is written, for example, C.

Sentence: a programming statement described in character form in some source language. For example, a complete Pascal program.

Syntax: the rules for forming a sentence in a source language that govern whether the sentence is correctly stated or not. For example, a WHILE in Pascal must be followed by an expression, then a statement. The syntax rules of a language are generally expressed as production rules.

Semantics: the rules that govern how the components of a source language will behave when executed. For example, a WHILE in Pascal is executed by repeatedly evaluating and testing the expression, then, if it's TRUE, executing the statement.

Interpreter: a program that accepts a source language sentence, and executes all of its operations directly without producing any intermediate file.

Assembly language: symbolic instructions that closely mimic a particular CPU's architecture.


Usually one line of assembly language turns into a single instruction. The language may also provide macros, symbolic branch labels, and other assistance. For example, MOV AX,15H causes the literal value 0x15 to be transferred into the AX (16-bit) register on an 80x86 CPU.

Assembler: a program that accepts assembly language, and produces a binary machine module file.

Compiler: a program that accepts a source language sentence, then generates an intermediate language sentence. Usually, the new language sentence is an assembler program, or machine binary.

Linking loader: a program that accepts several modules produced by a compiler and/or assembler, then interconnects their internal references to form a complete working program.
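The glossary entry for floating-point numbers can be verified with a few lines of code. This sketch (using the 0.5 constant that appeared in the simple.pas translation, whose bits are 3f000000h) pulls the three fields out of a float and reassembles its value from the formula above:

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    float f = 0.5f;                        // the constant from simple.pas
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);   // view the float as raw bits

    unsigned s = bits >> 31;               // 1 sign bit
    unsigned e = (bits >> 23) & 0xFF;      // 8 exponent bits
    unsigned m = bits & 0x7FFFFF;          // 23 mantissa bits

    // value = (-1)^s * (1 + m*2^-23) * 2^(e-127), for normalized numbers
    double value = (s ? -1.0 : 1.0) * (1.0 + m / 8388608.0)   // 8388608 = 2^23
                 * std::ldexp(1.0, (int)e - 127);
    std::printf("bits=0x%08X  s=%u  e=%u  m=%u  value=%g\n",
                (unsigned)bits, s, e, m, value);
    return 0;
}

Running this prints bits=0x3F000000 with e = 126 and m = 0, giving the value 2^-1 = 0.5, as expected.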

ASCII Table
Table A.1 below lists the first 128 ASCII characters. These are the characters that we expect to appear in a source language, assuming that the source is in English. The group of characters between 00 and 31 (decimal) are called control characters. Most of these should never appear in the source language. Exceptions are character 09 (tab), 10 (line feed) and 13 (carriage return). Character 00 has a reserved meaning in C/C++ programs as a string terminator, but it has that role internally during execution of a program; the 00 character should never appear in a source file. All the characters between 32 and 126, inclusive, are more-or-less valid in a source program file. However, not all the characters have meaning within any particular programming language. For example, the #, @ and & characters have no purpose in Pascal (there are a few others). While a source file may contain these, the scanner is responsible for detecting these and removing them, with a warning message. We will use character 04 as an end-of-file designator. This was an arbitrary choice, but it has no role otherwise in a source program. This character will not appear in any source file; it's merely inserted at the end of a file for the sake of the scanner algorithm. Since a byte can carry any value from 0 to 255, the characters from 127 to 255 inclusive also play no role in English programming source languages. Some of these represent special characters found in foreign alphabets, and may be used in comments or other programming variations. We will assume that they have no role in our source programs.

Line-feed and Carriage Return


The line-feed and carriage-return characters (10, 13) will appear in a source file. How they are used depends on the operating system. In Unix, a standard text file is expected to have a single line feed at the end of each text line. However, most Unix tools now accept (and ignore) a carriage-return character. MSDOS and Windows text files normally have a carriage-return/line-feed pair at the end of each text line. Some DOS tools will fail if the carriage return is missing; others don't care. These text file differences can cause some grief for you if you blindly copy a DOS file to Unix or vice versa. For example, a Solaris Unix workstation is usually able to read and write MSDOS formatted diskettes. But the access software won't necessarily convert text files from one form to the other.

ftp
By using ftp (file transfer protocol) over a network to transfer a text file, you can have this text file translation performed for you. Just choose asc mode when transferring a text file, and ftp will make sure the file is copied with the appropriate line endings. If you choose bin mode instead, the file will be copied exactly as-is, with no translation of line endings.

The SJSU computer engineering labs require that you use sftp instead, i.e. Secure Shell ftp; refer to the online user's guide for more information. This will translate DOS files to Unix and vice versa, given the right configuration.

Carriage Return in vi Editor


You can view the extra carriage returns in the vi editor; they appear there as ^M at the end of each line. If you want to remove them while in vi, type the following:
:1,$s/^V^M//

where ^V is control-V, and ^M is control-M. The colon (:) puts vi in a command mode, similar to that used in the editor ex. The 1,$ says to perform the following command on the first through the last line, inclusive. The s says to substitute one string for another: the first string is between the first and second slash (/), and the second string is between the second and third slash. So the first string is control-V control-M, and the second string is empty.

The first string should be the carriage-return character (control-M) by itself. Unfortunately, carriage-return is what the Enter key sends, and vi considers Enter to end this command. By preceding control-M with control-V, we suppress this unwanted termination.

tounix
A better way to convert a DOS text file to Unix form is by calling the utility tounix. This takes any number of text files and converts each of them from DOS form to Unix form. If the file is already in Unix form, it does nothing. It also does nothing if it decides the file is in binary form (has nonprintable characters). This is not a standard Unix tool. It won't be found outside the SJSU computer engineering Unix lab. A similar tool, todos, will convert a Unix text file to DOS format. Use it if you need to transfer a Unix text file to a disk for reading in a Windows computer.



Chapter 2: Regular Expressions and Finite State Machines


W. A. Barrett, San Jose State University nch2.doc, vs 2

Introduction
A computer program can be regarded as one long sequence of ASCII characters. This sequence will obviously contain letters, digits and other visible special characters, such as the ones above the numbers on your keyboard. It will also, in general, contain space characters, tab characters, line feed characters and carriage return characters. Given such a sequence, how can we break it down into something intelligible that can be interpreted as a C++, Pascal, assembler, or other high-level program?

Let's start by considering a source program as a string (an abstract string, not the STL string type) of characters. We then propose that a valid program (with no syntax errors) is a subset of all possible strings of ASCII characters. But that's essentially an infinite set of strings. How can we design a finite algorithm that can somehow filter out all the unwanted sequences from all the possible sequences? Put another way, our algorithm should operate on some given string (a program source file), then announce whether or not that program belongs to the subset that we want.

This is essentially the parsing problem. An algorithm that can separate wanted from unwanted string subsets is called a parser. Given a parser, we say that it accepts or selects a string if it can examine the string and report that it belongs to the specified subset of its language. Of course, we want more than just a stamp of approval of a string; we want something useful to come out of the algorithm, e.g. a source assembler file that expresses the same idea expressed in the high-level language file. That is the semantics problem: uncovering the meaning of our sequence of characters, and providing something useful as a result. Clearly, the parsing problem must be solved to do this. We will see that after we've developed a parser, it can also be used to extract meaning from the string, permitting us to generate useful semantics.

We will approach the subject of computer translation by examining a particularly simple, yet powerful, way of describing a subset of character sequences. This is the regular expression. Regular expressions provide a simple way of classifying strings as belonging to a wanted subset or not. We will use them as a means of finding the tokens of our high-level language.

Regular expressions have many applications in modern computer systems. Many Unix text tools use regular expressions, or some variation of them, to search for strings in files. For example, if you need to look for all the strings in a file that start with letter s followed by one or more other letters, you would write
grep -E 's[a-zA-Z]+' filename

The -E is needed to tell grep to interpret the following string as a regular expression. This regular expression says to consider letter s followed by one or more (the +) of the preceding thing (in the brackets) as something to find in the file.

Regular Expressions
A regular expression is made up of a combination of these elements:

characters: each ASCII character is a regular expression. Some characters are used as operators as explained later, and these must be escaped with a backslash character.

empty character: the empty character ε is a regular expression. You can often avoid using this in a regular expression.

concatenation: If r1 and r2 are regular expressions, so is r1 r2. We'll usually just omit the concatenation symbol by just writing r1 followed by r2.

escaped meta-character: When any of the metacharacters { * | ( ) + ? \ [ ] } are needed as a literal character, we prefix it with backslash, like this:

    \* represents *
    \( represents (
    \\ represents \

parenthesized regular expression: If r is a regular expression, so is (r)

alternative: If r1 and r2 are regular expressions, so is r1|r2. This represents a choice of r1 OR r2.

closure: If r is a regular expression, so is r*. This stands for 0 or more concatenations of r. Zero concatenations is an empty character.

non-empty closure: If r is a regular expression, so is r+. This stands for 1 or more concatenations of r. This is a shorthand for the expression r r*.

any one character of: [abcde] represents any one of the characters a, b, c, d, e. This is a shorthand form for the regular expression (a|b|c|d|e). Also, [b-m] represents any one of the characters b, c, d, e, f, g, h, i, j, k, l, m. The dash character "-" can appear between any two ASCII characters; all of the included characters (as listed in the ASCII table) are part of the choice set.

choice: If r is a regular expression, so is r?. This stands for the empty character or the regular expression r. r? is a shorthand form of ( r | ε ).

Precedence
( ) can be used freely to establish precedence.

*, ?, + have the highest precedence (like ++ or -- in C). Each binds the most tightly to its preceding character or regular expression. Thus abc* means ab·(c*), where · represents the concatenation operator.

concatenation has intermediate precedence.

| has the lowest precedence, and therefore binds the least tightly. Thus abc|d*ef means (abc)|((d*)ef).

Example 1
Consider this regular expression:
[a-zA-Z_][a-zA-Z0-9_]*

It stands for a C or Pascal identifier. This selects any string that starts with an upper or lower-case letter, or underbar, followed by zero-or-more letters, digits or underbars. Here are some strings that are accepted by this regular expression:
K
while
for
claim
AaBbCcDd9876543210
__my_Name_
a12345

Notice that there is no limit on the number of characters in a string accepted by this regular expression. That applies to any regular expression that contains a + or a * operator, clearly. Here are some strings that are not accepted by this regular expression (why not?):
5abc abc+rt
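If you want to experiment with patterns like this one, modern C++ can test them directly through the standard <regex> library. This is our own illustration, not one of the book's tools; std::regex syntax differs in small ways from the notation used here, but this particular pattern carries over unchanged:

    #include <iostream>
    #include <regex>

    int main() {
        // The identifier pattern from Example 1.
        std::regex ident("[a-zA-Z_][a-zA-Z0-9_]*");
        const char* samples[] = { "while", "a12345", "5abc", "abc+rt" };
        for (const char* s : samples)
            std::cout << s << ": "
                      << (std::regex_match(s, ident) ? "accepted" : "not accepted")
                      << "\n";
        return 0;
    }

regex_match demands that the whole string match, which is exactly the acceptance test described above: the last two samples are rejected because 5abc starts with a digit and abc+rt contains a + outside the permitted character set.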

Example 2
(\+|-)?[0-9]+

stands for the possibly-signed integers. To break this down into its constituents, you need to pay close attention to the precedence rules. The ? operator clearly applies to the parenthesized expression
\+|-

The backslash preceding "+" clearly means that the "+" character is not an operator, but a valid terminal character. Thus this expression means "either + or -". With the ? operator applying to it, it means "either + or - or nothing". We therefore have an optional sign preceding the rest of the expression. There are no | operators at the top level, so this is just a concatenation of these two expressions:
(\+|-)? [0-9]+

The subexpression, [0-9], is a choice of one of the characters 0, 1, 2, 3, ... , 8, 9. In fact, it's equivalent to the regular expression
(0|1|2|3|4|5|6|7|8|9)

It's clear that having this shorthand form is a great savings in typing. Try writing out the equivalent of
[a-zA-Z]

and you'll see why. The complete expression, [0-9]+, is clearly the + operator applied to the choice of digits. This stands for one or more digits in the range of 0 to 9. We can now describe this regular expression: it's an optionally-signed decimal integer. The sign clearly belongs in front; it can't follow one of the digits. Also, the number must contain at least one decimal digit. It can't be just a sign. Here are some strings accepted by this regular expression:
+1 -15278239847 2256

Here are some strings that are not accepted by this regular expression (why not?):
+15-16 ++22 325a6


Example 3
(\+|-|ε)(([0-9]+.[0-9]*)|([0-9]*.[0-9]+))

stands for the possibly-signed decimal numbers, i.e. numbers with a decimal point. The decimal point may either be embedded in the digits, or precede them, or follow them. We've added more parentheses than are necessary to help avoid confusion. This r.e. breaks down into this expression:
(\+|-|ε)

followed by this expression:


(([0-9]+.[0-9]*)|([0-9]*.[0-9]+))

Regarding the first expression, we've seen that \+ is just the + character. The backslash is required to avoid treating the + as the one-or-more operator. The - character stands for itself. The two vertical bar characters | are alternation operators. The symbol ε stands for an empty string. An empty string consists of no characters at all. So we can interpret the first expression as a single + character, or a single - character, or nothing. Now consider the second expression. We can remove its outer parentheses, clearly:
([0-9]+.[0-9]*)|([0-9]*.[0-9]+)

We see that this is an alternation of these two regular expressions, i.e. one or the other of these:
[0-9]+.[0-9]* [0-9]*.[0-9]+

Here are some strings accepted by this r.e.:


-45.16 +.3 3.14159 +165.

Here are some strings that are not accepted by this r.e. (why not?):
45+16 .1.5 13.+ .

Example 4
(E|e)(\+|-)?[0-9]+

stands for the following concatenated expressions:


(E|e) (\+|-)? [0-9]+

You should be able to see that this represents the exponent part of a floating-point number, which starts with letter E or e, has an optional sign, then an integer. Here are some strings that are selected by this regular expression:
e-5 E5 e4

Here are some strings that are not selected by this regular expression:


+16 22 E5e6 E+-6

Example 5
Here is a regular expression that consists of example 3 followed by zero-or-one of example 4:
(\+|-|ε)(([0-9]+.[0-9]*)|([0-9]*.[0-9]+))((E|e)(\+|-)?[0-9]+)?

This now represents any signed decimal integer or floating-point number. It precisely separates all those belonging to this category from those that don't. Here are some accepted strings:
+45.16E-5 1 .16 25. 5E5 .6e-6

...and here are some that are not accepted:


. +16-5 3E1615E5.6

Regular Expression as a Generator of Sentences


It should be clear that a regular expression can stand for a large number of different terminal expressions. We can expand a regular expression into a particular sentence by following the operators carefully, making choices where necessary. The principal trick is in finding the operators and the operands on which they operate. Adding parentheses to a complicated expression is one way of doing this. Consider this expression for a decimal number described previously (example 3),
(\+|-|ε)(([0-9]+.[0-9]*)|([0-9]*.[0-9]+))

We can easily generate a particular sentence from this by making particular choices for each of the operators. When we find a * operator, for example, we can choose to expand its operand (say) 3 times. When we see a choice operator (?), we can choose to expand its operand or not. When we see an alternation or a [..] operator, we can choose one of its components. Consider the first component,
(\+|-|ε)

Let's choose -, so our string starts with this character:


-

The next component is a choice of two subexpressions. Let's choose the second one, removing the outer parentheses:
[0-9]*.[0-9]+

This is clearly a concatenation of three subexpressions:


[0-9]* . [0-9]+

The first one is zero-or-more digits. Let's choose two of them, like this:
07

The second regular expression is just the character .:


.


The last regular expression expects one or more digits. Let's choose three:
054

Concatenating these, we have the generated string


-07.054

Finite State Machines


A finite state machine or FSM is a set of states connected by transitions. (See figure 1). This is an abstract model of the behavior of certain kinds of electronic or software systems. Through such a model, we can understand just how the system behaves, can design new systems, and can gain confidence that our systems will behave as we expect them to.

Essentially, we may consider an FSM to provide a sequence of operations on inputs. The inputs may be binary stimuli to an electronic computer, characters read from a tape, or characters supplied through a keyboard. An FSM can be used to describe the behavior of some interconnected flip-flop registers in an electronic binary computer. It can also be used to model a certain class of software tools.

We plan to use the FSM model to represent a certain class of translator, one that accepts exactly the sets of strings described by regular expressions. By organizing an FSM properly as a software program, we can arrange that it accept a sequence of input characters, i.e. a string, check that string for correct syntax construction, and, finally, generate some new output in response to the input string. That's essentially the function of a translator. It'll find a place in a more general compiler as a front-end lexical analyzer, designed to find and isolate small tokens in a programming language, for example, strings, identifiers, numbers, and other special tokens.

Before we launch a theory of operation, reduction and construction, let's examine a typical FSM.

Graph Form
An FSM can be described by a graph similar to the one shown in figure 1. Each circle represents a state. We've given names to each of the states: the letters S, A, B, etc. How the states are named isn't at all important, but we clearly want the states to be uniquely named, i.e. two states should not have the same name. One state is special: the start state, which is state S in figure 1.

[Fig. 1. An FSM. Its states are S, A, B, C, D, E, F, G, H, with F the halt state; its transitions are listed in the table given under Table Form below.]

You can think of the FSM as a clockwork mechanism that, when operated, can only be in one of its states at any one moment. As time progresses, it may jump from one state to another, but only along one of the paths directed out of the current state. It's started in its start state (S), and then proceeds through various other states until it reaches a halt state, which is state F in our example, indicated by the double circle around the F. This particular machine therefore has a finite operational lifetime, as well, in moving from S to F. When it reaches F, it must be restarted as a new invocation. One might also provide a transition back to S, so that the machine could operate forever. However, our machines won't operate quite that way. The "state jumps" must occur according to the arrows in the diagram, which represent the transitions


of the FSM. Each transition is marked with a condition that permits the transition. Since we are using an FSM as a translator, the transition conditions will be characters drawn from some input string. If the FSM is in a certain state, for example, C, and the next input character in the string is a "d", then the FSM can only transfer back to state C. If the next character is a period "." instead, then it must transfer to state D. In the transition, the input character is said to be accepted or read. We expect to see the character following that one on the next transition.

The symbol ε stands for "empty symbol", or "no symbol". On an empty symbol, the FSM is permitted to transfer to the target state regardless of the input character. It also does not advance past the next input character.

You'll notice that in some states, for example state A, the FSM can transfer to either of two or more different states on the same character. If it's in state A, it can transfer to state B or to state C on character "d". It can also take the empty move to state E, and thence accept character "d".

An FSM can have more than one halt state. Our example above shows just one halt state, but we will develop other machines that have several halt states. An FSM can also have moves out of a halt state. Just because it happens to reach a halt state doesn't mean that it must stop there. If it can continue through another transition, it's permitted to. It can only have one start state, however.

Multiple Choices
The possibility of multiple choices from some states means that this FSM is non-deterministic. Essentially, you can throw a die to decide which way to move from some of the states. From state A, given character "d", you have three choices: to state B, to state C, or (through the empty move) to state E. You'll also notice that in certain other states, G for instance, there's only one way to proceed. For example, if it's in state G, then the next character must be a "d". If the next character happens to be something else in that state, then we say that the FSM blocks.

When an FSM is blocked along some path, it's important to see if some other path will accept the string. Only if no path can be found from the start state to a halt state, scanning the whole input string, can we claim that the input string cannot be accepted and therefore has a syntax error.

An Acceptable String
For example, let's try the string
d.d

Starting at S, we clearly must take the empty transition to A. Suppose we choose the upper path to B. That consumes the first "d" character, leaving us with the remaining string
.d

We're clearly forced to take the empty move to F. But there are no moves out of F, and we have some characters left over. So that's not an acceptable path. Going back to state A, let's try the middle path. The move to C consumes the first "d" character nicely, giving us the remaining string
.d

We have a "." move to state D, so we take it, giving us the remaining string
d

There's a "d" move from D to itself, so we take it, giving us an empty remaining string. But the empty move to F is still OK. That takes us to a halt state, with no remaining string. This is a legal path, so we can say that the string d.d is accepted by this FSM.


An Unacceptable String
Let's try an unacceptable string:
+.d.

Starting at S, we take the "+" transition to A, yielding the remainder


.d.

We must take the empty transition to E. We must then take the "." transition to G, yielding the remainder
d.

We must take the "d" transition to H, yielding the remainder


.

Now the FSM is blocked. The "d" transition from H to itself can't be taken, because we don't have a d next. If we take the empty transition to F, there's no move out of F on anything. There are no other possible paths, so we conclude that our FSM cannot accept this string. This string has a syntax error at the second "." character. In general, if we try all possible paths on a string, and can only reach as far as the nth character, and are unable to scan past that character in any of the paths, then we say that the first syntax error occurs on the nth character.

FSM as a Syntax Checker


You can now see how an FSM can serve as a syntax checker. It's able to work through certain input strings, but not others. The strings that it can work through somehow, by choosing just the right path, from the start state S through to a halt state F, are said to be accepted. All other strings are not accepted. Those strings that are accepted are said to form the language accepted by the FSM. We can also say that an accepted string is a sentence in the language of the FSM. Incidentally, this FSM accepts numbers containing a decimal point, and with an optional sign. The character "d" stands for any decimal digit 0, 1, 2, ..., 8, 9. So the FSM accepts each of these strings:
+dd.ddd d -d dddd +.dd -ddd.

However, it will not accept any of these strings (try showing that every possible path fails on each string):
+dd+ -dd.dd. . +..

You can now see how this FSM very nicely selects legal numbers and rejects illegal ones. That's the pattern for all the translators and syntax checkers that we will be developing in this text, i.e. finding a suitable machine model, then learning how to construct one that exactly accepts some language that we have in mind.

Table Form
We can also express an FSM as a transition table. An example is given below. This expresses the same FSM as in the graph in figure 1.
δ      +     -     .     d      ε
S      A     A                  A
A                        B,C    E
B                        B      F
C                  D     C
D                        D      F
E                  G     E
(F)
G                        H
H                        H      F

We list the states down the left-hand column, under the δ. In the top row, we list each of the different characters that appear on the transitions in the machine, i.e. +, -, ".", d and ε. The body of the table contains blanks or states.

Here's how to read the table. Begin with the start state S. From the table, there's a transition on character "+" to state A. There's also a transition on character "-" to state A. And, there's a transition on the empty character ε to state A. These clearly correspond to the left part of the state diagram. This describes the "S" row in the table. There are two empty boxes in the S row. These represent transitions that are disallowed. If the FSM is in state S, and the next character is a period ".", there's no move permitted. Similarly on character "d". (In fact, the ε transition could be taken on any character, including + and -.)

Each row can be interpreted in the same way. Choose a row, for example, C. Find state C in the graph, then examine the transitions out of C to the other states. They should agree with the ones written into the table.

Notice that one table entry contains two states. In row A, on a "d" character, we have a transition to either B or C, so we write both of them into that table slot. We need to think of several different states in the same slot as a set of states. The same state shouldn't be written twice in the same slot. Don't write A,B,A somewhere in the table; this is the same as A,B. Also the column for the ε moves is important. At this point in our development, there's no reason to treat empty moves any differently than the others.

Halt states are shown in the table like this:
(F)

That's to suggest the double circle drawn around a halt state in the FSM graph.

Definitions
We need to review some definitions of terms that we've introduced above.

State: Describes the current situation of an FSM. An FSM is considered to be in just one state at any particular moment, but will transfer from one state to another in response to input stimuli.

Finite states: The number of possible states is finite and fixed.

Character: A fancy name for the input characters. In our application, an FSM will be used to recognize the characters of an input language like C or Pascal in preparation for a more powerful parser. Its input will be a sequence of characters.

String: An ordered sequence of characters.

Sentence: A sequence of characters presented as the input character sequence, or stimulus, for an FSM.

Transition: A change from one state to another state, usually associated with one specific character or the empty character.

Transition character: The character which permits a particular transition from one state to another.

Empty character: A name given to "no character": ε. It is used in an FSM to describe a transition that can be taken without consulting the next character in the input string.

Start state: The initial state of a machine.

Halt state: A terminating state. When the machine is in a halt state, and there are no more characters to be scanned, it is said to accept the input character stream. There may be more than one of these; there must be at least one. More state transitions may occur from a halt state if there are more characters.

Finite state machine: A program or automaton that can be described in terms of a finite number of states and a set of state transitions.

Formal Definition of an FSM


From a language point of view, an FSM will scan a particular class of sentences, forming its language, as follows:

The FSM is initially in its start state.
On each character in the input sentence, the FSM is expected to undergo a transition on that character to another state. (There may be no such transition.)
When the end of the sentence is reached, the FSM should be in a halt state. The sentence is then said to be accepted or recognized by the FSM.

The FSM will fail to accept a sentence in either of these ways:

By not having a state transition on the next character.
By not falling into a halt state upon scanning the last character of the sentence.

A sentence is accepted by a nondeterministic FSM if some path that accepts the sentence can be found. There may be several such paths. There may also be paths that result in a machine block.

Transition Functions
Another way of describing an FSM is with a transition function. These are used in language theory; see [1] and [2]. A transition function describes one transition on an input character t, from a state q. Since there may be several target states on any particular character, we need to specify the target states as a set of states, which possibly contains one or more states. One component of a transition function is written:

δ(q, t) = { p1, p2, p3, ... }

where
q, p1, p2, p3, ... are states
t is a character
δ is the transition function.

This means that when the FSM is in state q, on the character t, the next state may be one of the members of the set { p1, p2, p3, ... }, a finite set. The complete function in general requires many such specifications, but each one has this form. An FSM requires many such functional descriptions, in general, one for each entry in the transition table. Mathematically, the set of these descriptions constitutes the transition function, in the sense that it maps an arbitrary pair {state, character} into a set of states. You should be able to see that one component is equivalent to one cell in the state transition table.
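In a program, one component of δ corresponds naturally to one entry in an associative table from a (state, character) pair to a set of states. A minimal sketch in C++ (the type and function names are ours, not lexgen's):

    #include <map>
    #include <set>
    #include <utility>

    typedef int State;
    // The transition function delta: (state, character) -> set of states.
    typedef std::map<std::pair<State, char>, std::set<State> > Delta;

    // Look up delta(q, t); an empty result set plays the role of a
    // missing table cell: the machine blocks.
    std::set<State> delta(const Delta& d, State q, char t) {
        Delta::const_iterator it = d.find(std::make_pair(q, t));
        return it == d.end() ? std::set<State>() : it->second;
    }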

Production Rules and Grammars


Yet another way of defining a FSM is with a set of production rules. A production rule defines a replacement operation on a string. A set of production rules constitutes a grammar. A production rule consists of a left member and a right member, and is written like this:

lm → rm


where lm and rm are strings of terminals and nonterminals in general. The general idea is that if we have a string containing the string lm, then this rule permits us to replace lm with rm. So the arrow means "may be replaced by". Given a set of production rules, and some initial string, then we can make replacements to obtain new strings. We can also say that lm derives rm, and that rm is derived from lm.

A terminal is some character or token that we want to see in a completed, or fully derived, string. A string consisting only of terminals is said to be a sentence. A nonterminal is a name assigned to some set of sentences that we expect to derive from the nonterminal. The purpose of a nonterminal is to provide a replacement opportunity. We do not expect to see any nonterminals in a completed sentence. However, a string possibly containing some nonterminals is called a sentential form.

A grammar must also specify a particular initial sentential form. We can then restrict the possible sentential forms in the grammar to those that are derived from the initial form. The initial sentential form will always be a single nonterminal. If we have a grammar with a more general initial form, say σ, we can always add a new nonterminal S and a production rule

S → σ

and then insist that the initial sentential form is S. So we lose no generality by requiring that the initial sentential form is a particular nonterminal. We can then restrict the sentences derived in the grammar to those sequences that can be derived from the initial nonterminal. Similarly, the sentential forms derived in the grammar are those derivable from the initial nonterminal. We note that the initial nonterminal is itself a sentential form. Also, a sentence derived in the grammar is considered to be a sentential form, for the sake of generality.

Example
Consider this grammar, which we'll call F0:
S → 0 S
S → 1 S
S → 0
S → 1

It has one nonterminal, S, and two terminals, 0 and 1. The only possible initial sentential form is S, since that's the only nonterminal. If the initial nonterminal isn't specified, you can safely assume that it's the left member of the first production rule. This grammar can be used to spell out binary numbers, i.e. arbitrary sequences of 0's and 1's. To get a single 0, for example, we start with S, then choose the third rule to derive a 0, like this:
S → 0

This is actually a derivation of the string 0, although it looks just like the third production rule. The string 0 is both a sentential form and a sentence. It's a sentence because it contains no nonterminals, and it has been derived from the initial nonterminal. Let's derive some longer sequences. We can derive the sentence 0110 as follows:
S → 0 S → 0 1 S → 0 1 1 S → 0 1 1 0

The first step in this derivation comes from the initial nonterminal, S. We choose the first rule, S → 0 S, and this causes the sentential form 0S to be derived. We've underlined the nonterminal in each sentential form that will become replaced in the next step. In the second derivation step, we replace the underlined S using the second rule, S → 1 S. This yields the sentential form 01S. Continuing, we choose the second production rule, replacing the S in 01S by 1S to obtain 011S.


Finally, we choose the third rule, S → 0, replacing S in 011S by 0, to yield the sentence 0110. We can't make any more replacements, since there are no nonterminals in this sentence.

Grammar Classification
You'll notice that grammar F0 is not a completely general grammar, according to our definitions above. The left member lm is always a single nonterminal. In general, it could be some sequence of terminals and nonterminals. The right member rm is either empty, a single terminal, a single nonterminal, or a terminal followed by a nonterminal. In general, the right member could be some sequence of terminals and nonterminals, or even no sequence, i.e. the empty string. We say that F0 is a right-linear grammar. This is a very special form of grammar, and one that exactly matches an FSM. If the left member is always a single nonterminal, while the right member is some arbitrary sequence of terminals and nonterminals, then we say that the grammar is context-free. The idea of context-free is that within any sentential form, you can just choose any nonterminal for replacement by a rule, without regard to its neighbors, i.e. without regard to its context. We'll study context-free grammars later. They turn out to be very useful in describing real languages such as Pascal, C, Ada, etc. A completely general grammar is said to be context-sensitive. Here's a simple context-sensitive grammar:
X → 0 X 0
0 X 0 → 1 0 X 0 1
X → 1

And here's a derivation with this grammar. We've underlined the substrings that are about to be replaced in the next step:
X → 0 X 0 → 1 0 X 0 1 → 1 1 0 X 0 1 1 → 1 1 0 1 0 1 1

You'll notice that in each step, we look for a match to the left member of some production rule, then replace that matched substring by a new string. We won't use context-sensitive grammars to define any other language. Such languages can only be recognized by a nondeterministic stack machine, which is an inefficient way to parse sentences. We will use context-free grammars extensively, starting with chapter 10.

Right-linear Grammars and FSM


Suppose we have an FSM, described in table form, like the one above. We can easily write a right-linear grammar that is equivalent to the FSM. We need to first understand what equivalence means. It means that any sentence that the FSM accepts can be derived in the grammar, also that any sentence derived by the grammar can be accepted by the FSM. We won't prove this, but we can make it plausible through some examples.

Let's start by describing how to find an equivalent right-linear grammar G for an FSM M. That's done by following these rules:

Each FSM state name will be a nonterminal in grammar G. The start state name will be the initial nonterminal of G.

For each transition from state A to state B in the FSM on the character c, write the rule

A → c B

For each empty transition (i.e. on the empty symbol ε) from state A to state B in the FSM, write the rule

A → B

For each halt state F in the FSM, write the rule

F → ε

Example
Here's how we can obtain the right-linear grammar for the FSM given above (figure 1). We'll reason through the first few rules, then just state the remaining ones:

The initial nonterminal is S, since S is the start state of the FSM.

For the transition S to A on token "+", we write the rule

S → + A

For the transition S to A on token "-", we write the rule

S → - A

For the transition S to A on the empty token ε, we write the rule

S → A

For the transition A to B on token "d", we write the rule

A → d B

and so forth. Here are the remaining rules in our grammar:

A → d C
A → E
B → d B
B → F
C → d C
C → . D
D → d D
D → F
E → d E
E → . G
G → d H
H → d H
H → F

Since F is a halt state, we add this rule:

F → ε

Proving that this grammar's language is exactly the language of the FSM is beyond the subject matter of this book. However, it should be clear that any particular transition in the FSM is mimicked by a production rule in the grammar. Suppose we have some path in the FSM that describes a sentence in the FSM, then we can also find a sequence of rules that derives that sentence. Similarly, if we have a sentence derived in the grammar, then each of the derivation steps exactly corresponds to a transition in the FSM that yields a path in the FSM accepting the sentence. The reader should be able to test these assertions with a few strings in the language.
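The construction is mechanical enough to automate. The sketch below prints the grammar rules for a machine held in a table; the representation, the names, and the choice of '@' to stand for the empty character are all our own, and only a fragment of figure 1 is entered:

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    int main() {
        // (state, character) -> set of next states; '@' marks an empty transition.
        std::map<std::pair<std::string, char>, std::set<std::string>> t = {
            { {"S", '+'}, {"A"} },
            { {"S", '-'}, {"A"} },
            { {"S", '@'}, {"A"} },
            { {"A", 'd'}, {"B", "C"} },
        };
        for (const auto& e : t)            // one production rule per transition
            for (const std::string& to : e.second)
                if (e.first.second == '@')
                    std::cout << e.first.first << " -> " << to << "\n";
                else
                    std::cout << e.first.first << " -> " << e.first.second
                              << " " << to << "\n";
        return 0;
    }

Running it yields A → d B, A → d C, S → + A, S → - A, and S → A. A halt state still needs its F → ε rule, written separately.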

Deterministic vs. Nondeterministic FSMs


The FSM we've been looking at (figure 1) is said to be nondeterministic. What makes it nondeterministic is the presence of empty moves and/or moves with more than one transition on the same token. For example, the empty move from state A to state E on the empty token ε causes the FSM to be nondeterministic. And so do the transitions A to C and A to B, both of which are on the same character, "d".

A deterministic FSM has no empty transitions and no multiple transitions. All transitions are on specific characters. No arbitrary choices are provided anywhere in the machine. We clearly want a deterministic FSM as a translator of sentences. Determinism makes it easy to implement the FSM as a program. No matter which state the FSM is currently in, there's never any uncertainty about what to do next. The next character in the string either matches one of the transitions or it doesn't. If it doesn't match, we know immediately that the string contains a syntax error, and the error is on that character. Since there are no empty moves, there's no possibility of dodging a possible syntax error by taking an empty move instead. When the next character does match a transition, we can confidently accept that character, then transfer to another part of the program that corresponds to the next state.

FSM as a Program
Let's see how an FSM can be expressed as a program. We'll just do one little piece of our machine above, so that you can see how it's done. We'll construct our program using the goto statement and labelled statements. Each statement label will correspond to a state in the machine. So for state E in the FSM, we'll write the label label_E in the program. We'll do the transition checking with a switch statement. Each character out of state E will have a case in the switch statement. Following this idea, here's what a little piece of the FSM would look like as a program. This describes state E and its out transitions:
    label_E:
        switch (nextChar()) {
        case '.':
            getChar();
            goto label_G;
            break;
        case '0': case '1': case '2': case '3': case '4':
        case '5': case '6': case '7': case '8': case '9':
            getChar();
            goto label_E;
            break;
        default:
            syntaxError();
            exit(0);
        }

The function nextChar is supposed to return the first character of the remaining string, without changing anything. The function getChar is supposed to remove the first character of the remaining string, exposing the one that follows it, if any. The function syntaxError is supposed to report a syntax error. To be friendly, it should display the string and point to the offending character that caused the error; this will be the first character of the remaining string at this point. Note that character "d" in practice refers to any of the ten digits. This makes our program accept strings of arbitrary numbers. We've expanded that into a set of 10 case labels as shown, to keep within the spirit of this style of programming.
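The text leaves nextChar, getChar and syntaxError undefined at this point. One minimal way to supply them, scanning an in-memory string (our own sketch, not the book's scanner classes):

    #include <cstdio>

    static const char* start = "dd.d";   // the whole input string
    static const char* rest  = start;    // the remaining string

    int nextChar(void) { return *rest; }        // peek at the next character
    void getChar(void) { if (*rest) ++rest; }   // consume one character
    void syntaxError(void) {                    // point at the offending character
        printf("syntax error at character %d\n", (int)(rest - start) + 1);
    }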

Why the Program Breaks Down with a Nondeterministic Machine


To see why a multiple transition or an empty transition causes a failure in the program, let's see what the code would look like for the A state:
label_A:


    switch (nextChar()) {
    case '0': case '1': case '2': case '3': case '4':
    case '5': case '6': case '7': case '8': case '9':
        getChar();
        goto label_B;
        break;
    case '0': case '1': case '2': case '3': case '4':
    case '5': case '6': case '7': case '8': case '9':
        getChar();
        goto label_C;
        break;
    case Empty:
        // What should we do here?
        goto label_E;
        break;
    default:
        // is this really a default case now?
        syntaxError();
        exit(0);
    }

This clearly won't work. In the first place, we have a switch statement with duplicate case labels, and no C compiler will permit this to be compiled. The duplicate case labels of course arise from the multiple transition on the "d" (digit) token. We also aren't sure what to do about the empty move. The abstract idea is to just transfer to state E, i.e. branch to label_E, without looking at the next token at all. But that doesn't fit our switch statement plan at all, and we are left wondering whether or how a syntax error might be reported. The situation would clearly be much worse if there were several empty moves out of one state, and that situation could happen. We only need one of these ambiguities, and our program plan is ruined. This is clearly unsatisfactory. We need a deterministic FSM. Fortunately, there's a way of converting any nondeterministic FSM into a deterministic FSM. So we need to develop that rather than try to craft a program from a NDFSM.

Why Nondeterministic Machines?


You are probably wondering why we bother with nondeterministic FSMs. Why not just design a language by drawing a deterministic machine in the first place? The answer is that it's much easier to describe a language through production rules or through regular expressions, rather than through writing out a deterministic FSM graph. This provides more flexibility and power in designing particular FSM languages, as we'll discover later. A language can be clearly and concisely described through rules or regular expressions, but the result is a nondeterministic FSM. Designing a language by requiring that the result be a deterministic FSM becomes a very difficult task. It turns out that we can start a language design with a regular expression or right-linear production rules, obtain the corresponding nondeterministic FSM, and then reduce the NDFSM to an equivalent DFSM through a straight-forward algorithm. All of these operations have been built into a generator program, which we call lexgen. We'll see how to use lexgen in the next chapter. For now, let's see how the FSM reduction algorithms work. We'll end the chapter with a discussion of regular expressions and show how to obtain an equivalent NDFSM from a regular expression. This is all leading up to a powerful way of designing and implementing a lexical analyzer.


NDFSM Reduction
We want to transform a NDFSM into an equivalent, reduced deterministic FSM. This is done through the following steps:

1. reduce all empty transition cycles
2. reduce all empty transition moves
3. merge all multiple-transition states into new states
4. find and remove all unreachable states
5. reduce the machine by identifying and merging all of its equivalent states

Reduction of Empty Transition Cycles


An empty transition cycle is a sequence of empty transitions that starts and ends on the same state.

To find them:

Provide a boolean tag (holding true or false) associated with each state.
For each state K in the machine:
    Set the tags false. Mark K's tag true.
    Trace through all possible empty transitions from K until you either reach K again or a state marked true. On each transition from some state P to a next state Q (on ε), mark Q's tag true.
    If you reach K a second time, you have an empty transition cycle, and the set of states whose tags are marked true are those in the empty cycle.

To remove them:

Let P be one of the states in an empty cycle. Then merge each of the states in the cycle other than P into P, i.e. change all transitions to the other states in the cycle into a transition to P. Remove all the other states in the cycle. If any of the states in the cycle is a halt state, then P becomes a halt state.
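Here is a sketch of the tagging search in C++. The representation is ours: eps[K] holds the targets of K's empty transitions. As in the prose description, every state tagged true by the time K is reached again is reported as part of the cycle:

    #include <map>
    #include <set>
    #include <vector>

    typedef char State;
    std::map<State, std::set<State>> eps;      // the empty transitions only

    // Returns the tagged states if an empty cycle runs through K,
    // or an empty set if there is none.
    std::set<State> emptyCycleThrough(State K) {
        std::set<State> tag;                   // the states marked true
        tag.insert(K);
        std::vector<State> work(eps[K].begin(), eps[K].end());
        while (!work.empty()) {
            State q = work.back(); work.pop_back();
            if (q == K) return tag;            // reached K a second time: a cycle
            if (!tag.insert(q).second) continue;   // already marked true
            for (State r : eps[q]) work.push_back(r);
        }
        return std::set<State>();              // no empty cycle through K
    }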

Merging Two States


State merging is something we will do many times to reduce a finite-state machine. To merge state B into A, combine all of B's transitions into state A, like this: for every transition found out of state B, add a similar transition out of state A. If B is a halt state, make A a halt state.

Suppose there's a transition on character t from state B to states {S1, S2, S3}, and there's a transition on character t from state A to states {S1, S4, S5, S6}. Then merging state B into A means that state A will have a transition on character t to states {S1, S2, S3, S4, S5, S6}. Note that the resulting state set is the union of the states found under A on t and under B on t. Do this for each character.

Here's a formal definition of state merging, using the transition delta notation described earlier, i.e. δ(n, P) = S, where n is a character or terminal token, P is some state, and S is some set of states. This says that when in state P, on the token n, the next move is to one of the states in the set S. S can be empty, which means that no move on (n, P) is permitted. When a state P is merged into state Q, the result is keeping all of P's transitions, combined with all of Q's transitions. Here's how to express that idea in a formal way:


Given state P and state Q, to merge Q's transitions into P:

for all n such that n is a token do {
    δ(n, P) = δ(n, P) ∪ δ(n, Q)
}

Symbol ∪ is the set union operator. This notation is used so often in mathematical logic that each of the operations is given a symbol name:

"for all" is given the name ∀. This is an upside-down letter A, which should suggest "All".
"there exists" is given the name ∃, which is a backwards letter E, suggesting "Exists".
"such that" is usually expressed by | , which is not the same as the C operator of the same name. We'll also use this symbol later to represent an "or" condition.
"is a member of the set" is given the name ∈. (This looks much like ε, which we've used for the empty string, so its context is important.)
"is replaced by" is given the name =. This is essentially the same as the C operator.

Most of the above clause can then be compressed into this statement:

(∀n | n ∈ T)( δ(n, P) = δ(n, P) ∪ δ(n, Q) )

where we understand T to represent the complete set of terminal tokens (or characters). Expanded into English, this asks us to consider all possible tokens in our token set T. For each such token n, we want the transition state set δ(n, P) to be augmented by the state set δ(n, Q).
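The compressed clause translates almost line for line into code. A sketch over our own table representation (halt-state bookkeeping is left to the caller, as the rules above describe):

    #include <map>
    #include <set>

    typedef char State;
    typedef char Token;
    // state -> (token -> set of next states)
    typedef std::map<State, std::map<Token, std::set<State>>> Machine;

    // Merge Q's transitions into P: delta(n, P) = delta(n, P) U delta(n, Q).
    void mergeInto(Machine& m, State P, State Q) {
        for (auto& tr : m[Q])      // every token n with a move out of Q
            m[P][tr.first].insert(tr.second.begin(), tr.second.end());
    }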

Example
Here's an FSM with an empty cycle. You can see that state A goes to B, which goes to C which goes to A, all on empty moves.
       +    -    .    digit    empty
S      A    B                  A
A                              B
B      D              B        C
C           C    D    C        A
D                F    B        C
(F)

The state sequence A -> B -> C -> A can occur on the empty string. These three states comprise an empty cycle and can therefore be merged. We merge states B and C into A, then remove states B and C, yielding the following table:
       +    -    .    digit    empty
S      A    B                  A
A      D    C    D    B,C
D                F    B        C
(F)

Merging rows A, B, C causes D to appear in the + column, C to appear in the - column, and B,C to appear in the digit column. The empty transitions B, C and A are removed. Rows B and C are removed. We must next redirect all transitions to B or C to state A throughout the table. This is easily done by just renaming B and C as A throughout the table. Note that the composite state B,C under digit becomes A,A, which is just A:


       +    -    .    digit    empty
S      A    A                  A
A      D    A    D    A
D                F    A        A
(F)

Removal of Empty Transitions


After all empty cycles have been identified and removed, we need to eliminate all empty transitions. The general plan:

Suppose state P maps to Q on an empty transition. Then merge all of Q's transitions into P.
If Q is a halt state, make P a halt state.
Remove the empty transition to Q. (Do NOT remove state Q.)
Repeat until there are no more empty transitions in any state.
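A sketch of this plan, reusing the Machine type and mergeInto function from the state-merging sketch above. EPS is whatever token value we reserve for the empty character (our own choice); all empty cycles must already have been removed, or the loop below will not terminate:

    const Token EPS = 0;   // our stand-in for the empty character

    void removeEmptyMoves(Machine& m, std::set<State>& halts) {
        bool changed = true;
        while (changed) {
            changed = false;
            for (auto& row : m) {
                auto it = row.second.find(EPS);
                if (it == row.second.end()) continue;
                std::set<State> targets = it->second;
                row.second.erase(it);              // remove P's empty transitions
                for (State q : targets) {
                    mergeInto(m, row.first, q);    // P inherits Q's moves
                    if (halts.count(q)) halts.insert(row.first);
                }
                changed = true;                    // merged-in moves may include EPS
            }
        }
    }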

Example
This is the FSM that describes decimal numbers, appearing near the beginning of this chapter. It has several transitions on the empty character ε.
       +    -    .    digit    empty
S      A    A                  A
A                     B,C      E
B                     B        F
C                D    C
D                     D        F
E                G    E
(F)
G                     H
H                     H        F

The empty move in row S (to state A) is removed by merging A's states into S's states. (The empty move from S to A is removed.) This brings the digit move from A to (B,C) into a digit move from S to (B,C). It also brings in the empty move from A to E into an empty move from S to E:
       +    -    .    digit    empty
S      A    A         B,C      E
A                     B,C      E
B                     B        F
C                D    C
D                     D        F
E                G    E
(F)
G                     H
H                     H        F

We need to repeat this operation, this time merging the E row into the S row:
       +    -    .    digit    empty
S      A    A    G    B,C,E
A                     B,C      E
B                     B        F
C                D    C
D                     D        F
E                G    E
(F)
G                     H
H                     H        F

The empty move in row A (to state E) is removed by merging the E row into the A row:
       +    -    .    digit    empty
S      A    A    G    B,C,E
A                G    B,C,E
B                     B        F
C                D    C
D                     D        F
E                G    E
(F)
G                     H
H                     H        F

The empty move in row B (to state F) is removed by merging F's states into B's states. F has no transitions to merge, but B becomes a halt state since F is a halt state:
       +    -    .    digit    empty
S      A    A    G    B,C,E
A                G    B,C,E
(B)                   B
C                D    C
D                     D        F
E                G    E
(F)
G                     H
H                     H        F

Continuing the same way with the empty moves in row D and row H, we end up with this NDFSM, which has no empty moves:
       +    -    .    digit
S      A    A    G    B,C,E
A                G    B,C,E
(B)                   B
C                D    C
(D)                   D
E                G    E
(F)
G                     H
(H)                   H

Removal of Multiple Transitions


A multiple transition is easily recognized in the table form by a set of states, for example, B,C,E in the table above. Here's how to deal with a multiple transition:

Create a new state. Call it BCE.
Merge all the transitions from states B, C and E into the new state BCE.
If any one of states B, C or E is a halt state, make BCE a halt state.
Repeat until all the multiple transitions are accounted for by new states.
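A sketch of one such step, with states named by strings so that composite names like BCE can be formed (representation and names ours):

    #include <map>
    #include <set>
    #include <string>

    typedef std::map<std::string, std::map<char, std::set<std::string>>> Table;

    // Build, if necessary, the composite state for a set such as {B, C, E}.
    std::string makeComposite(Table& m, std::set<std::string>& halts,
                              const std::set<std::string>& parts) {
        std::string name;
        for (const std::string& s : parts) name += s;   // "B","C","E" -> "BCE"
        if (m.count(name)) return name;                 // already built
        m[name];                                        // create its (empty) row
        for (const std::string& s : parts) {
            for (auto& tr : m[s])                       // merge each part's moves
                m[name][tr.first].insert(tr.second.begin(), tr.second.end());
            if (halts.count(s)) halts.insert(name);     // halt if any part is
        }
        return name;
    }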

Example
Start with the previous NDFSM. We create a new state BCE. It receives the merger of the states in B, C and E. Since B is a halt state, BCE must be a halt state:
       +    -    .     digit
S      A    A    G     B,C,E
A                G     B,C,E
(B)                    B
C                D     C
(D)                    D
E                G     E
(F)
G                      H
(H)                    H
(BCE)            D,G   B,C,E

We now discover we need a new state DG. This receives the merger of states D and G. It is also a halt state since D is a halt state:
       +    -    .     digit
S      A    A    G     B,C,E
A                G     B,C,E
(B)                    B
C                D     C
(D)                    D
E                G     E
(F)
G                      H
(H)                    H
(BCE)            D,G   B,C,E
(DG)                   D,H

Now we need a new state DH, which is formed the same way:
       +    -    .     digit
S      A    A    G     B,C,E
A                G     B,C,E
(B)                    B
C                D     C
(D)                    D
E                G     E
(F)
G                      H
(H)                    H
(BCE)            D,G   B,C,E
(DG)                   D,H
(DH)                   D,H

At this point, every state mentioned inside the table is listed in the leftmost column. We can stop. This is a deterministic FSA.

Removing Unreachable States


There may be states that cannot be referenced through transitions from the start state. You can find these by marking the start state, then every state that the start state accesses. Continue by marking every state that any marked state accesses. When you are done, some states will be unmarked. They cannot be reached by any state sequence from the start state. So they can be removed.
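The marking pass is a simple graph search. A sketch, using the Table representation from the previous sketch (add #include <vector> to its headers):

    std::set<std::string> markReachable(const Table& m, const std::string& start) {
        std::set<std::string> marked;
        std::vector<std::string> work;
        marked.insert(start);
        work.push_back(start);
        while (!work.empty()) {
            std::string q = work.back(); work.pop_back();
            Table::const_iterator row = m.find(q);
            if (row == m.end()) continue;
            for (const auto& tr : row->second)       // every transition out of q
                for (const std::string& p : tr.second)
                    if (marked.insert(p).second)     // newly marked
                        work.push_back(p);
        }
        return marked;    // any state not in this set can be removed
    }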

Example
Start by marking the S state. Then mark each of the states mentioned in the S row, i.e. A, G, and BCE, like this:
       +    -    .     digit    mark
S      A    A    G     B,C,E    ✓
A                G     B,C,E    ✓
(B)                    B
C                D     C
(D)                    D
E                G     E
(F)
G                      H        ✓
(H)                    H
(BCE)            D,G   B,C,E    ✓
(DG)                   D,H
(DH)                   D,H

Since S, A, G and BCE can be reached, the states mentioned in their rows can be reached. So we mark them. Continue until no more states can be so marked. We end up with this marking:


       +    -    .     digit    mark
S      A    A    G     B,C,E    ✓
A                G     B,C,E    ✓
(B)                    B
C                D     C
(D)                    D
E                G     E
(F)
G                      H        ✓
(H)                    H        ✓
(BCE)            D,G   B,C,E    ✓
(DG)                   D,H      ✓
(DH)                   D,H      ✓

States B, C, D, E and F are unreachable. They can be deleted. We end up with 7 states. This may seem peculiar: they played a role in the original NDFSM, so why are they now unreachable? The answer is that each of these has found its way into a new composite state. For example, C's transitions are now part of BCE. State F had no transitions, but its property as a halt state has converted several other states into halt states.

State Equivalences
It's often possible to further reduce the number of states in an FSM. Indeed, we can find an FSM with a minimal number of states. Further reduction of an FSM may seem pointless when the machine is as small as our example, but it becomes important when the FSM contains hundreds or thousands of states. Having those extra states means a certain loss in performance through larger memory or more circuitry.

Two states P and Q are said to be equivalent if for every string S that is accepted starting at state P, the string can be accepted starting at state Q, and vice versa: all strings accepted from state Q are also accepted from state P. When two states can be shown to be equivalent, they can be merged, thereby reducing the FSM. Merging two equivalent states P and Q amounts to changing every Q to P throughout the machine. You should then find two or more identical state rows, one of which can be deleted.

Distinguishable States
Two states P and Q are said to be distinguishable if there is some string S that is accepted from P but not Q, or vice versa. If we can show that two states are distinguishable, then we know that they cannot be merged. By distinguishing pairs of states, it becomes easier to test the remaining states for distinguishability or equivalence.

Testing for Distinguishability--the Distinguishability Rule


We test states in a pairwise fashion for distinguishability by looking for the following. Assume the states are P and Q:

Is one of the states {P, Q} a halt state, while the other is not? If so, they are distinguishable.

Is there a transition on some token T from state P, but not from Q? If so, P and Q are distinguishable.

Is there a transition on some token T from state Q, but not from P? If so, P and Q are distinguishable.

Is there a transition on some token T from state P to state R, and from state Q to state S, where R and S are distinguishable? If so, then P and Q are distinguishable.

Example 1
Look at the FSM developed in the previous section. States S and B cannot be equivalent: B is a halt state, but S is not. This follows from the fact that the empty string can be accepted from a halt state, while it cannot be from a non-halt state.

Example 2
This also refers to the FSM given above. Are the states S and A equivalent? They are both non-halt states. S goes to G on ".". So does A. S goes to BCE on digit. So does A. However, S goes to A on + and -, but A does not. So S and A cannot be equivalent. (There's at least one string, starting with "+", that is accepted from S, but not from A.) We conclude that S and A are distinguishable.

Example 3
States DG and DH are clearly equivalent, since they have a digit transition to the same state, and there are no other transitions.

Example 4
Suppose we had these two rows in our reduced FSM:
       digit
(DG)   D,H
(DH)   D,G

Are states DG and DH equivalent? It's easy to show that they are: any string accepted by one is accepted by the other. Our distinguishability test also fails to distinguish them, unless we assume DG and DH to be distinguishable to begin with; then each has a transition to a distinguishable state. However, states DG and DH are clearly equivalent and they can be merged into a single state, yielding a smaller FSM that accepts exactly the same sentences as the larger FSM. (Both machines accept a sequence of 0 or more digit characters.)

In the interests of reducing a machine to the least number of states, we need to add a third rule that will serve to reduce any FSM to minimal states: Assume that all the states of an FSM are equivalent. Then partition them only as required through the distinguishability rule.

The general idea is to place all the states in a single set, which we'll call a state group. We then look for evidence that each group containing two or more states can be split into two groups through distinguishing characteristics. If we're unable to do this, then the states in each of the remaining groups are considered to be equivalent.

In our example, DG and DH are placed in a single state group; call it group G1. Both DG and DH are halt states, so they can't be distinguished by that. They also have transitions on digit to the same group G1, so they can't be distinguished that way. We conclude that DG and DH are equivalent and can be merged. We next formulate these ideas into an algorithm.

Identifying and Merging Equivalent States


If two states A and B are equivalent, then they can be merged into a single state. We merge states A and B by renaming state B as state A everywhere in the FSM. This will result in two identical rows in the state table; one of these can be removed. To identify the equivalent states, follow this algorithm:

1. Partition (i.e. subdivide) the states into halt and non-halt states. A partition of a set S is a set of disjoint subsets of S such that every member of S is in exactly one subset. Partitioning a set is like dividing up a pie: no state is in more than one piece, and every state is in exactly one piece.
2. Call each subset of the partition a group.
3. Consider any transition from a state A to some other state B as a transition from state A to B's group. We can then apply the distinguishability rule to this modified FSM, in which transitions between states are replaced by transitions from a state to a state group.
4. If the subgroups G1, G2, ... of some group G are distinguishable, then they belong in different partitions. But note that if states A, B and C belong to some group, and we find that A and B are distinguishable, we also need to test A:C and B:C for distinguishability.
5. Repeat this process until no distinguishable subgroups can be found. The remaining groups contain equivalent states, and comprise a minimized DFA.
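For a deterministic machine (one target per state and token), one refinement pass can be coded by giving each state a signature of target groups; states in the same group with different signatures are split apart. A sketch with our own names:

    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // state -> (token -> the single next state); a deterministic FSM.
    // Every state must appear as a key, even if its row is empty.
    typedef std::map<std::string, std::map<char, std::string>> DFSM;
    typedef std::vector<std::set<std::string>> Partition;

    Partition refine(const DFSM& m, const Partition& groups, const std::string& tokens) {
        std::map<std::string, int> groupOf;        // state -> its group number
        for (int g = 0; g < (int)groups.size(); ++g)
            for (const std::string& s : groups[g]) groupOf[s] = g;

        Partition out;
        for (const std::set<std::string>& grp : groups) {
            // Signature: for each token, the group reached (-1 means no move).
            std::map<std::vector<int>, std::set<std::string>> split;
            for (const std::string& s : grp) {
                std::vector<int> sig;
                const std::map<char, std::string>& row = m.at(s);
                for (char t : tokens) {
                    std::map<char, std::string>::const_iterator it = row.find(t);
                    sig.push_back(it == row.end() ? -1 : groupOf[it->second]);
                }
                split[sig].insert(s);              // same signature, same subgroup
            }
            for (auto& kv : split) out.push_back(kv.second);
        }
        return out;    // call repeatedly until the partition stops changing
    }

Starting from the halt/non-halt partition of Example 5 below and calling refine with the tokens "abc" until nothing changes reproduces the groups of figures 2 and 3.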

The Initial Partitions


The initial partition of an FSM can be formed by first partitioning the halt and non-halt states into two groups G1 and G2. Then within each of these groups, we look for pairs of states A and B and a character T such that A has a transition on T to some other state, and B doesn't, or vice versa. This will usually refine the partition considerably, sometimes to the extent that every group contains one state. However, in general, some groups will have two or more states. We next address these.

Finding the Distinguishable Subgroups


Consider some group G containing states S1, S2, S3, ..., Sn.
1. We examine the states in a pairwise fashion, i.e. (S1, S2), (S1, S3), ..., (S1, Sn), (S2, S3), and so on.
2. Decide whether a state-pair is distinguishable or not. If not, keep the pair in the same subgroup of G. Otherwise, put the two states in separate subgroups.
3. Our goal is to partition G according to the distinguishability of its states, but only just. The algorithm must not create a new subgroup for state A just because it is distinguishable from state B. A subgroup must contain all the states that are (so far) indistinguishable from one another, and none that are distinguishable. This requires examining all the possible pairs of states within each group.


Example 5
Let's work through an example. Consider the following DFSM. It has 7 states, one halt state and transitions on the characters {a, b, c}:
state   a   b   c
A       B   E
B       C   D   A
C       E   B
(D)     C   B   A
E       A   D   A
F       A       A
G       C   F   C

[Fig. 2: the initial state groups: the halt group {D}, and the non-halt groups {A, C}, {B, E, G} and {F}.]

[Fig. 3: the refined groups after G is split off: {A, C}, {B, E}, {G}, {F} and {D}.]

It isn't obvious which of the states {A, B, C, E, F, G} are equivalent. However, it's clear that A can't be equivalent to B, since B has a move on character c while A does not. Similarly, F can't be equivalent to any of the other states, since it has no move on character b while the others do.

We can therefore start with the state groups shown in figure 2. Note that states A and C comprise one group, since they are both non-halt states and have moves on the same characters (and lack moves on the same characters). Similarly, B, E, and G belong in a group of their own. F is in a group of its own. The dotted curve encloses all the non-halt states. The only halt state is D.

Distinguishing more subgroups


We now look at transitions from individual states (i.e. A or C) to groups, in an attempt to distinguish states within each of the groups. Let's start with states A and C. From the table, A goes to group BEG on {a, b}. C also goes to group BEG on {a,b}. We therefore can't distinguish A and C (yet). If we later find that B, E or G can be distinguished, we need to return to this experiment. Next look at B and E. From the table, B goes to group AC on {a,c}. So does E. B goes to D on {b}, and so does E. Therefore, B and E can't be distinguished (yet). Look at E and G. E goes to group AC on {a, c}, and so does G. E goes to D on b, but G goes to F on b. Therefore, E and G are distinguishable. We also find that B and G are distinguishable, for the same reasons. Thus the group BEG can be partitioned into two groups, one containing BE, and the other containing G. This is shown in figure 3. When any group becomes partitioned, we need to reexamine all the transitions, to see if one or more other groups can be partitioned.

So we need to examine states A and C again. From the table, both A and C go to group BE on {a, b}, so we still can't distinguish A and C. States B and E both go to group AC on {a, c}. States B and E both go to D on {b}. We can't distinguish B and E. We are unable to distinguish any of the states in this round, and therefore conclude that state A is equivalent to C, and that state B is equivalent to E. Given these equivalences, we may next replace C by A everywhere in the table, and also E by B everywhere. There will be two identical rows marked A, so we just cross out one of them. Similarly, one of the two identical rows marked B can be removed. The resulting machine has 5 states.

Another Example
Consider the table below, which represents a DFSM with inputs X and Y. Reduce this machine by finding and merging its equivalent states.

state   X   Y
S       A   B
A       B   D
B       B   C
C       C   B
(D)     A   E
(E)     C   C
(F)     G   F
(G)     F   G

This requires several passes. For pass 1, we have two groups, G0 = {S, A, B, C} and G1 = {D, E, F, G}. These partition the state set into the non-halt and halt states. Since the transition patterns are the same throughout, we can't further partition the sets. Now compare the states pairwise according to their transitions. In the table below, for example, we compare S with A, and note that on input X, S goes to G0, and A goes to G0. Also, on input Y, S goes to G0, but A goes to G1. We therefore conclude that S ≠ A. So S and A go into separate partitions on the next pass, but let's also examine the pairs (A, B), (S, B), (S, C), (B, C) and (A, C), shown in the table below. When we discover that two states are distinguishable, we know they belong in different partitions. When they are apparently equivalent, they belong in the same partition on the next pass, but we may discover later that they are in fact distinguishable. That's why when we find that two states appear to be equivalent, we defer judgement by noting that they are "equivalent for now". The subtle point here is that although we can often quickly prove that two states are distinguishable, we cannot so quickly prove that two states are equivalent. We can only keep seemingly equivalent states in the same partition until we later show them to be distinguishable, or have exhausted all the possibilities of showing that they are distinguishable.

state   X    Y    conclusion
S       G0   G0
A       G0   G1   S ≠ A


state   X    Y    conclusion
A       G0   G1
B       G0   G0   A ≠ B
S       G0   G0
B       G0   G0   S = B for now
S       G0   G0
C       G0   G0   S = C for now
B       G0   G0
C       G0   G0   B = C for now
A       G0   G1
C       G0   G0   A ≠ C

For the next pass, we consider the partitions G0 = {A}, G1 = {D, E, F, G}, and G2 = {S, B, C}. We can tackle any of these groups, but let's see if G1 can be broken down by examining the pairs (D, E), (D, F) and (D, G):

state   X    Y    conclusion
D       G0   G1
E       G2   G2   D ≠ E
D       G0   G1
F       G1   G1   D ≠ F
D       G0   G1
G       G1   G1   D ≠ G

Clearly, D belongs in its own partition. For the next set of state groups, we let G0 = {A}, G1 = {S, B, C}, G2 = {D}, and G3 = {E, F, G}. Let's examine pairs in group G1:

state   X    Y    conclusion
S       G0   G1
B       G1   G1   S ≠ B
S       G0   G1
C       G1   G1   S ≠ C

So state S is distinguishable from the others. Let the next state groups be G0 = {A}, G1 = {S}, G2 = {B, C}, G3 = {D}, and G4 = {E, F, G}. Let's look at E, F and G:

state   X    Y    conclusion
E       G2   G2
F       G4   G4   E ≠ F
E       G2   G2
G       G4   G4   E ≠ G
F       G4   G4
G       G4   G4   F = G for now

The next (and final) state grouping is G0 = {A}, G1 = {S}, G2 = {B, C}, G3 = {D}, G4 = {E}, G5 = {F, G}. The following pairs table shows that B = C and F = G. We can't distinguish these states or break apart any of the groups, so this is the minimal machine, with 6 states. States B and C can be merged. Also, states F and G can be merged.


state   X    Y    conclusion
B       G2   G2
C       G2   G2   B = C
F       G5   G5
G       G5   G5   F = G

Finding the Graph of a Regular Expression


Let's now return to regular expressions. We can now show that:

Given an arbitrary regular expression R, a non-deterministic finite-state machine can be constructed that exactly accepts the strings accepted by R.

This is called Thompson's construction. Start with a single start state S and a halt state F. Connect the two with the regular expression R, like this:
S --R--> F

The abstract idea here is that whatever string R can expand into will also be one that can be stepped through in the FSM from the start state to the one halt state. We now break down R by looking at its outermost structure one step at a time. By the "outermost" structure, we mean the operators with the lowest precedence. We'll provide some examples later to make this clear. At each step, you will have some regular expression R connecting two arbitrary states X and Y. X may be state S or some other state. Y may be the halt state F or some other state. Finally, X and Y may be the same state. If R has the form of a single character (whether empty or not), no further expansion of this move is necessary. The machine will clearly accept the single character if it's on the transition path between X and Y; that is, it accepts that part of a regular expression consisting of one character by having it be on the transition between two states. If R has the form of an alternation, split the alternation into two separate transitions, like the following. This clearly goes to the heart of what an alternation means: if the FSM is in state X, then it can choose to get to Y by either accepting the expression A or the expression B.


A|B   expands to this:

X --A--> Y
X --B--> Y

(two parallel transitions from X to Y)
If R has the form of a closure, introduce a new state and form a loop in the FSM like the following. The new state should be given a name that is different from all the other state names.

A*   expands to this:   X --empty--> Z,  Z --A--> Z (a loop),  Z --empty--> Y   (Z is a new state)

Notice that we introduce an empty transition from X to the new state Z, and from Z to Y. The regular expression A appears in a transition from Z to itself. Recall that X and Y might be the same state; all that would do is fold the diagram around. The state transitions are still essentially as shown above. To see why this produces a machine equivalent to the closure operation, recall what closure means: zero or more of expression A. In the machine, to get zero of A, we follow the empty path from X to Z, and then to Y. To get one of A, follow the empty path from X to Z, then along A to Z, then empty to Y. It's clear that you can get N iterations of A by tracing the loop from Z to itself N times before taking the empty path to Y. If R has the form of a concatenation, introduce a new state in the FSM like the following. As with the closure, we give it a name that isn't already used for any other state. Also note that X and Y may be the same state, or X might be the start state or Y might be a halt state. In any case, the resulting machine clearly represents a concatenation of A and B.
AB   expands to this:   X --A--> Z,  Z --B--> Y   (Z is a new state)

All the other operators can be converted into a machine segment in the same way, by using their equivalences. For example, A+ is equivalent to AA*, which is the concatenation of A with A*. Its machine therefore looks like this:
X --A--> Z,  Z --A--> Z (a loop),  Z --empty--> Y

Strictly speaking, we should have introduced two states, with a transition on A to state W, then W on empty to Z, Z to itself on A, and Z on empty to Y. However, this machine is equivalent. The [ ] operator is of course equivalent to (x|y|z|...), and its construction is just like the alternation given above, except with many transitions from X to Y, one on each of the characters in the set.
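The construction is easy to express as a recursive routine over a regular-expression tree. The following C++ sketch is our own illustration, assuming a tiny AST with CHAR, ALT, CAT and STAR nodes (the types are made up for this example); each case adds exactly the states and transitions described above.

#include <vector>

// Transitions: onChar == 0 stands for the empty transition.
struct Edge { char onChar; int to; };

struct NFA {
  std::vector<std::vector<Edge> > states;
  int newState() { states.push_back(std::vector<Edge>()); return (int)states.size() - 1; }
  void add(int from, char c, int to) { Edge e = {c, to}; states[from].push_back(e); }
};

enum Kind { CHAR, ALT, CAT, STAR };
struct Node { Kind kind; char ch; Node* left; Node* right; };

// Expand regular expression r between two existing states x and y.
void build(NFA& m, const Node* r, int x, int y) {
  switch (r->kind) {
  case CHAR:                      // a character: one transition X -> Y
    m.add(x, r->ch, y);
    break;
  case ALT:                       // A|B: two parallel paths from X to Y
    build(m, r->left, x, y);
    build(m, r->right, x, y);
    break;
  case CAT: {                     // AB: new state Z between the parts
    int z = m.newState();
    build(m, r->left, x, z);
    build(m, r->right, z, y);
    break;
  }
  case STAR: {                    // A*: empty in, loop on A, empty out
    int z = m.newState();
    m.add(x, 0, z);               // empty transition X -> Z
    build(m, r->left, z, z);      // A as a loop on Z
    m.add(z, 0, y);               // empty transition Z -> Y
    break;
  }
  }
}

Starting the recursion with build(m, R, S, F) yields the NDFSM for the whole expression; the other operators (+, ?, [ ]) can be handled through their equivalences as noted above.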

Chapter 2: Regular Expressions and Finite State Machines page 53

Example 1
a(a|b)+b?b(a|b)(a|b)

This is a concatenation of several elements, as follows:


a ( a | b )+ b? b ( a | b ) ( a | b )

[Figure: the resulting NDFSM, with start state S, intermediate states A through E, and halt state F.]

The resulting NDFSM is given above. The first component, a, is clearly formed by states S and A. The second component (a|b)+ is formed by states A and B. The third component b? is formed by states B and C. The last two components are formed by states D, E and F.

Example 2
(ab|ba)(ab)*ab

This is a concatenation of these components:


( a b | b a ) ( a b )* a b

[Fig. 5: the resulting NDFSM, with start state S, intermediate states A through G, and halt state H.]

The resulting NDFSM is shown in figure 5. The halt state is H. The first component (ab|ba) is formed in the states S, A, B, C. The second component (ab)* is formed in the states C, D, E, F. The last component ab is formed in states E, G, H. Notice that the closure applies to the concatenation of a and b, not to each of these separately.

Summary
We've seen how each of these language representations is equivalent to the others (though we haven't proved this formally):
  regular expressions
  finite state machines expressed as a state graph
  finite state machines expressed as a table
  finite state machines expressed as a transition function
  right-linear production rules
We've seen how an FSM may be nondeterministic, through the presence of empty transitions and multiple transitions. We've seen that a nondeterministic FSM is almost impossible to program, but a DFSM is easy to program. We've discovered that an FSM can be considered a syntax checker in that it accepts certain sentences in its language, and rejects others. It is therefore a form of language translator or compiler in its own right. We've seen how to reduce a nondeterministic FSM to an equivalent deterministic FSM, with a minimum number of states.


We've seen how to express regular expressions in a computer-style language using ordinary ASCII characters, and how a regular expression can be used to generate a NDFSM. In general, this FSM theory can be used to design simple languages. An FSM isn't powerful enough to describe a complete modern programming language, but is commonly used to recognize tokens in most programming languages. We will use this idea in the next chapter by defining the tokens in the language as regular expressions, expanding them into a NDFSM, then reducing that to a minimal deterministic machine. We will use the halt states of the resulting DFSM to identify particular tokens. An FSM also appears in electronic designs involving synchronous logic state machines. Finding the minimal number of states in the abstract FSM of such a system is clearly important in reducing the number of components needed to support the system, and in avoiding redundant states.

Bibliographic Notes
The earliest work on FSM is in McCullough [3]. Kleene [4, 5] first introduced the notion of regular expressions. Algorithms for the conversion of state transition functions and regular expressions are found in McNaughton and Yamada [6]. An early review paper is Brzozowski [7]. The material on state equivalence is due to Huffman [8], Moore [9], Mealy [10], and Aufenkamp and Hohn [11], as found in Gill [12]. The literature on the relation of FSMs to logic circuitry is extensive. See Gill [12] for a bibliography. Lexical analysis and its use of FSM technology has been discussed by many authors, e.g. Johnson [13], Conway [14], DeRemer [15], Gries [16] and Feldman [17], as well as Aho and Ullman [2].

References
[1] J.E. Hopcroft, J. D. Ullman, Formal Languages and their Relation to Automata, Addison-Wesley, 1972. Also see [2] [2] A. V. Aho, J. D. Ullman, The Theory of Parsing, Translation and Compiling, two volumes, Prentice-Hall, 1972. [3] McCullough and Pitts, A Logical Calculus of the Ideas Immanent in Nervous Activity, Bull. of Math. Biophysics 5 115/133, 1943. [4] S. C. Kleene, Introduction to Metamathematics, Van Nostrand Reinhold, New York, 1952 [5] S. C. Kleene, Representations of Events in Nerve Nets, in Automaton Studies, Princeton University Press, Princeton, NJ, 1956 [6] R. McNaughton and H. Yamada, Regular Expressions and State Graphs for Automata, IRE Trans. on Elect. Computers 9(1) 39/47, 1960. [7] J. A. Brzozowski, A Survey of Regular Expressions and Their Applications, IRE Trans. on Electronic Computers 11(3) 324/335, 1962. [8] D. A. Huffman, The Synthesis of Sequential Switching Circuits, J. Franklin Inst 257(3) 161/190; 275/303, 1954 [9] E. F. Moore, Gedanken-Experiments on Sequential Machines, in Automata Studies, Princeton, NJ, pp 129/153, 1956 [10] G. H. Mealy, Method for Synthesizing Sequential Circuits, Bell System Tech J. 34(5), 1045/1079, 1955. [11] D. D. Aufenkamp, F. E. Hohn, Analysis of Sequential Machines, IRE Trans. Vol EC-6, 276/285, 1957 [12] A. Gill, Introduction to the Theory of Finite State Machines, McGraw-Hill, New York, 1962 [13] W. L. Johnson, et al, Generation of Efficient Lexical Processors Using Finite State Automatic Techniques, CACM 11(12) 805/813, 1968


[14] M. E. Conway, Design of a Separable Transition-Diagram Compiler, CACM 6(7), 396/408, 1963 [15] F. L. DeRemer, Lexical Analysis, in Lecture Notes in Computer Science, Springer-Verlag, New York, 109/120, 1974 [16] D. Gries, Compiler Construction for Digital Computers, Wiley, New York, 1971 [17] J. Feldman and D. Gries, Translator Writing Systems, CACM 11(2) 77/113, 1968.


Chapter 3: Lexgen: A Lexical Analyzer Generator


W. A. Barrett, San Jose State University nch3.doc, vs 2.1

Introduction
A compiler or translator is organized as a series of filters that read a source file and yield an object file or symbolic assembler file. These are called the preprocessor, the scanner, the parser, and the code generator, respectively. See figure 1. The four filters may be separate tasks, linked together by pipes, or separate phases in a common program. Usually the preprocessor (if there is one) is a separate process or program. It accepts a text file and delivers a processed text file to the scanner. The output text file may be larger or smaller than the input file. The scanner phase accepts a source file and delivers a stream of tokens to the parser phase. The scanner is also called a lexical analyzer. A token is a group of one or more characters in sequence. A scanner is intended as an interface between the input stream and a parser. Its purpose is to filter the input character stream and generate a stream of tokens, skipping white space, and passing the tokens on to the parser. The parser phase will generate special data structures representing clauses, or collections of tokens, of the source language. These structures are acted upon by the code generator to yield a target file. The target file may be symbolic assembly code for some CPU, or some other low-level language.

[Fig. 1: A compiler as a pipeline: source file -> preprocessor -> scanner -> parser -> code generator -> target file.]

This chapter is primarily concerned with the scanner phase. We'll show how to apply finite-state machine theory to construct a scanner, given an arbitrary language definition. Our development is similar to, but not compatible with, the popular Unix/Linux tool lex [1].

Organization of Text Files


At some low level, a text file must be read by the scanner on a character-by-character basis. Text files are commonly prepared with a technical editor, and consist of ASCII printable characters (space, ASCII 0x20, through tilde, '~', ASCII 0x7E), tabs, and line endings only. (See the ASCII chart in chapter 1.) A line consists of a sequence of printable characters terminated by a line ending. A line ending has different forms, depending on the operating system. In most Unix systems, a text file line ending is a single line feed character (ASCII 0xA, or \n in C). Under MSDOS, Microsoft Windows and OS/2, a text file line ending is a carriage-return line-feed pair. (The carriage-return character is ASCII 0xD, or \r in C.) Reading a line from a text file is facilitated by several C/C++ library functions. We will use the C function fgets to fetch one line. This function expects a pointer to a string buffer, a maximum length, and a pointer to a FILE object. The text file should be opened through the C function fopen. This function expects a pointer to a filename, and a mode option. If you use option "r", the text file will be opened for read-only access, and each line fetched by fgets will be terminated by one line feed character, followed by a null character (ASCII 0). If the line is longer than the specified maximum length, the line feed character will be absent, but the null character will always be there. The remainder of the line will be fetched on the next fgets call. (See any C reference manual or use the Unix man utility for details about these functions.) A nice property of functions fgets and fopen is that they work exactly the same way in Unix and MSDOS. They are also supported in this standard way in most Windows/MSDOS compilers and most Unix compilers. Some technical editors allow any ASCII character, including non-printable characters, to be inserted in a text file, but this practice will cause complaints in our lexical analyzer. Don't do it. The program source code should not contain any ASCII characters other than \n, \t and space through tilde ('~', ASCII 0x7E). Only after the program is compiled, linked and executed may other ASCII characters appear in certain char variables or arrays.
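For example (a sketch; the file name is made up), the line-at-a-time reading loop might look like this:

#include <cstdio>

int main() {
  const int MAXLINE = 256;
  char line[MAXLINE];
  FILE* fp = fopen("sample.pas", "r");   // open for read-only access
  if (fp == NULL) return 1;
  while (fgets(line, MAXLINE, fp) != NULL) {
    // 'line' now holds one source line, ending in '\n' unless the
    // line was longer than MAXLINE-1 characters; it is always
    // null-terminated. Hand it to the scanner here.
    fputs(line, stdout);
  }
  fclose(fp);
  return 0;
}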

Preprocessors
A preprocessor is a translator program that accepts some language's source file and generates a modified source file intended for a scanner/parser translator. A familiar preprocessor is the C/C++ preprocessor, program cpp in Unix. (A similar preprocessor is available under Linux through the Gnu compiler gcc. Use info or man for details.) cpp provides several services, as follows:
  Macros can be defined and used later in the source program, through #define.
  Sections of code can be included or skipped through conditional macro commands, using #if and #ifdef. These must be terminated by a matching #endif, and can be nested.
  Errors can be induced in the preprocessor in case certain macro conditions are not met.
  Lines ending in a backslash are concatenated with the next line.
  Quoted strings may be concatenated by the preprocessor into single strings.
  Comments may be removed.
  External files may be included, through #include.
The resulting output file will of course be stripped of various source material. It may be expanded by including external files or expanding macros, or collapsed through conditionals. Since most compilers need to refer to the original source lines in order to generate meaningful error messages, the preprocessor must attach line number information to the generated file in a form that the compiler will accept. Otherwise, lines in the expanded file will not correspond to lines in the original source file.
A C preprocessor cannot be constructed from a pure FSM. It requires symbol table services (to support the #define macros) and a pushdown stack (to support nested #if ... #endif structures). It's normally written as a standalone program that accepts a source file as stdin, and generates an expanded processed text file as stdout.
A C preprocessor is a fairly easy program to write. It essentially copies most of the characters in the file from the input to the output, except when the first character in a line is "#". When the preprocessor detects that first character, it springs into life. A command word follows that "#". Here are some of the common command actions:
  #define name string. The name is entered into a symbol table and associated with string. Whenever name appears later in the source file (or in another #define string), it's looked up in the preprocessor symbol table and replaced with its associated string.
  #if expression. If the expression evaluates to "true", the following source lines are copied. Otherwise, they are skipped. A variation is #ifdef name, which expects a preprocessor name. Here, the copying or skipping is controlled by whether name appears in the preprocessor symbol table.
  #endif. This terminates the range of an #if or #ifdef.
  #include name. The file name, which should be a text file, is copied in place of this line. The file may contain other preprocessor directives, such as #include, causing other files to be copied in.
In addition, the preprocessor finds every identifier in the source file and checks to see if it's a #define name requiring expansion. The expansion is done without regard to the context of the #define name, a fact that sometimes causes great confusion to the programmer. Preprocessor names can also be defined outside the source file. For example, with most Unix C compilers one can write -Dmyvar=myname on the compiler command line. The preprocessor will then consider name myvar to be defined as the string myname, just as though it appeared in a define like this:

#define myvar myname

We'll assume from here on that any program source file has been filtered by a preprocessor before the scanner operates on it.
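As an illustration of the symbol-table service (our own minimal sketch, not cpp's actual code; a real preprocessor also handles macro arguments, conditionals and rescanning), the #define bookkeeping can be kept in a map from names to replacement strings:

#include <map>
#include <string>
using namespace std;

map<string, string> defines;   // the preprocessor symbol table

// #define name body: enter the macro into the table.
void doDefine(const string& name, const string& body) {
  defines[name] = body;
}

// Called for each identifier met while copying source to output.
// A fuller preprocessor would rescan the result for further defines.
string expand(const string& ident) {
  map<string, string>::const_iterator it = defines.find(ident);
  return (it == defines.end()) ? ident : it->second;
}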

Tokens
After preprocessing, and file reading, a program consists of a long stream of characters in the form of a file. You might consider the characters in the file as the atoms of the program. Then certain clumps of atoms form tokens, which might be considered the molecules of the program. In building a language parser, it's best to first clump the characters together into tokens, then have a higher-level algorithm, called a parser, work on the tokens. In the process, we can skip over comments, line endings and other irrelevant whitespace, so that the parser can concentrate on the intelligence of the program, which is in the stream of tokens. Some examples of tokens, drawn from the C language, are as follows:
+ += = == * << !=
if          ; a reserved word
for         ; a reserved word
127         ; a fixed-point number, also called an integer
3.6E-3      ; a floating-point number, also called a float or a double
x_44abc     ; an identifier
"a string"  ; a quoted string, which becomes an array of char
'c'         ; a character, which may also be considered an integer (its ASCII value)

A token can be classified as follows:
  a literal token, which stands exactly for itself, for example: + += = == * << != if for
  a lexical token, which stands for some set of character sequences; for example, the numbers, identifiers and strings.


Token Codes
Each token is assigned a token code, which is a unique integer by which the token may be identified. The parser is primarily interested only in the sequence of token codes passed to it by the scanner. A literal token is fully described by its token code. A lexical token requires some additional information to fully describe it. For example, the entire class of identifiers will be represented by a single token code, but the specific characters comprising an identifier will have to be carried in an auxiliary data structure. A string is another form of lexical token. All the strings in a language can be assigned one token code, but obviously each one has a different number and arrangement of characters in it. The parser usually doesn't care about what exactly is in any particular string, but it does care about distinguishing a string from a user identifier, and those from numbers. Literal tokens include the reserved words. A reserved word resembles an identifier--both start with a letter and may continue with a sequence of letters. However, there are only a small number of reserved words defined in any particular language. These play a pivotal role in the parser, so we assign a unique token code to each reserved word. The scanner must determine whether a given sequence of letters is an identifier or a reserved word. In some languages, for example, Fortran and PL/I, reserved words can also be used as identifiers, with some restrictions. This policy greatly complicates a translator, since the parser must tell the scanner when to look for a reserved word and when to expect an identifier. The token codes clearly comprise a finite set, usually less than 100, although the source string forms can take on a huge number of possible forms. For example, only one token code is assigned to all the identifiers in C, although ANSI C permits up to 31 characters in any one identifier. How many possible C identifiers are there? A single-character identifier must be a letter, which has 52 possible combinations. For two-character identifiers, we can have 52*63 combinations (a letter followed by a letter/digit/underbar). For three-character identifiers, we can have 52*63*63 combinations, and so forth. The total number of possible C identifiers is approximately 10^54, i.e. 1 followed by 54 zeroes. Numbers also have a huge number of source string forms. A double-precision floating point number has approximately 2^64 possible variations since the number contains 64 bits, and nearly all the possible bit combinations are legal. A number has an infinite number of string forms. Even a number as simple as unity can have an infinite number of string forms, for example:
1 1.0 1.00 0.01E2 1E0 10E-1 etc.

We therefore assign only one or two token codes to all numbers. Floating-point and fixed-point numbers will be distinguished in what follows by separate token codes, since the two have very different internal representation forms and different types. However, the specific value of a particular number token will be carried as a binary integer or floating-point number. We'll use a long integer to carry all integers (and characters) internally, and a double to carry all floating-point numbers. Some systems support higher precision forms of integers and floats, and it would be wise to choose the highest possible precision for these tokens. All literal strings are also assigned a single token code. A literal string can potentially be very long; ANSI C sets no maximum length for a string. An ANSI scanner must therefore be prepared to accept extremely long strings, perhaps several thousand characters in length.
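To make these representation choices concrete, here is one plausible token record, sketched in C++; the names are illustrative, not lexgen's actual classes:

#include <string>

enum TokenCode {
  TK_EOF, TK_IDENTIFIER, TK_INTEGER, TK_FLOAT, TK_STRING,
  TK_IF, TK_PLUS   // ... one unique code per literal token
};

struct Token {
  TokenCode   code;    // the parser mostly looks only at this
  std::string lexeme;  // the spelling: identifier name or string body
  long        ival;    // carries any integer or character value
  double      dval;    // carries any floating-point value
};

A literal token needs only its code; a lexical token also fills in the lexeme and the appropriate value field.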


Long strings can be represented in short source code lines by using the C concatenation features: A source line terminated with a backslash (\) is considered to be concatenated with the next line, but with the backslash removed. For example:
char str[]= "This string is car\
ried over two lines";

Two or more C strings written next to each other are to be concatenated. For example:
char str[]= "This string is " "concatenated with this" " string";

Whitespace and Tokens


Most programming languages permit comments, line endings, spaces and tabs to be included in the source file. In general, these are supposed to be ignored by the scanner, although some languages make use of a line ending token as part of the language's syntax. Ignoring spaces and line endings results in what is commonly called a free-form language--it doesn't matter how the various tokens are divided up among the source file lines. Only their order is important. For example, here are two source program fragments that contain exactly the same token sequence:
if(a==b)x=z;

if (a == b)     /* this program fragment
                   extends over three lines */
    x = z;

The token sequence in each of these is as follows:


if ( a == b ) x = z ;

The space character (ASCII 32) and the tab character (ASCII 9) are sometimes required to separate two tokens. For example, in Pascal, we can write
if a=b then ...

We clearly need a space or tab after the "if" and also before the "then" in order for the scanner to separate these tokens. Spaces and tabs can be used freely to indent lines and provide a pleasing appearance to the program.

Comments
Comments are used for code documentation. A comment may open with one special character form, such as "/*" in C, and close with another one, i.e. "*/". Pascal comments open and close with a left brace "{" and a right brace "}", respectively, or with "(*" and "*)", respectively. Such comments may contain any number of ASCII characters, including line endings, spaces, tabs, etc. The only characters


not permitted are those that signal the end of the comment. Some compilers permit nested comments. These allow a comment form to appear inside another comment form, for example, like this in Pascal form:
{ This comment contains { this comment }, because the braces are nested }

Standard Pascal does not permit nested comments. Neither does ANSI C, although the /* */ comment form would seem to permit nesting. Another style of comment opens with some character or character pair, and ends at the end of that line. For example, ANSI C++ supports a comment that opens with "//", which ends at the end of the line. Any ASCII characters may follow such a comment opener, and such a comment may be contained within a /* */ comment. The MASM assembler supports a comment that opens with a semicolon ";" and continues to the end of that line.

Tokens and Finite State Machines


It happens that a finite state machine can be designed to scan the tokens and whitespace of nearly every modern language. Recall that an FSM consists of a fixed and finite set of states and has no other form of memory, such as a stack or a number register. Certain languages, such as Fortran, have token rules that make it impossible to disambiguate tokens without some parsing operations running in parallel. However, C, C++, Pascal, Ada, PL/I and many other languages are such that the tokens can be recognized with little or no assistance from the parsing phase in a compiler. They can be disambiguated by an FSM. Another important exception occurs if the language permits a token or whitespace unit to be nested within another token or unit. An FSM cannot then be used as a scanner. A nested comment scanner requires a pushdown stack or a number register to keep track of the depth of nesting. For example, if it were the case that Pascal or C nested comments were to be supported, then we could not use a FSM to scan comments. This exception is easily worked around in practice by providing an escape hatch mechanism for the FSM, so that when a nested structure must be scanned, it can be done through a more conventional program with a stack and/or counters.

Tokens as Regular Expressions


A deterministic FSM with a minimum number of states can be constructed from a regular expression, as we've seen in the previous chapter. We can also describe each of the tokens in a language (usually) by a regular expression. For example, a decimal number token can be described by the regular expression
[0-9]+

where [0-9] stands for any one decimal digit character, i.e. one of the characters '0', '1', ... '8', '9'. The postfix operator + means "one or more of the preceding". We can describe an identifier by the regular expression
[a-zA-Z][a-zA-Z0-9_]*

Operator * means "zero or more of the preceding". This regular expression therefore says that an identifier must start with a letter and continue with zero or more letters, digits and underbars. This regular expression permits any number of characters in an identifier. If the language imposes a limit on the number of characters, that fact must be checked by an auxiliary program, not the FSM. Recall that a literal token is one or more characters that stands for itself. We can consider each literal token as a regular expression, provided that the metasymbols "(", ")", "+", "*", "|", etc. are properly escaped or quoted. Here are some examples of literal tokens with the appropriate escaping of metasymbols:
:=      ; the Pascal assignment operator
==      ; the C comparison operator
<>      ; the Pascal "not-equal" operator
=       ; equal sign
if      ; a reserved word
while   ; another reserved word
\+      ; character +
\\      ; character \
\(      ; character (

We will later introduce a shortcut way of expressing literal tokens to avoid having to use the escape character "\".
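As a taste of what the generated machines do, here is the identifier expression [a-zA-Z][a-zA-Z0-9_]* above, hand-coded as a two-state DFA in C++ (our own sketch; lexgen builds such recognizers automatically):

#include <cctype>

// State S: the first character must be a letter.
// State A: subsequent characters are letters, digits or underbars.
bool isIdentifier(const char* s) {
  if (!isalpha((unsigned char)*s)) return false;   // state S
  for (s++; *s; s++)                               // state A loops
    if (!isalnum((unsigned char)*s) && *s != '_') return false;
  return true;   // halted in state A: an identifier
}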

White Space
White space can be used freely in most modern languages to separate tokens, include comments, and generally improve the appearance of the source code. White space in C and Pascal includes the space character (ASCII 0x20), the tab character (ASCII 0x9), line endings (usually a line feed, ASCII 0xA, or a carriage return, ASCII 0xD, or both), and comments. We've noted that comments in C and Pascal come in several different forms. It turns out that an FSM for whitespace can also be designed, as we'll see, provided that comments are not nested. Here are some of the white space characters expressed in C/Unix notation, which we'll use for our regular expressions:
\n    ; a line ending character
\r    ; a carriage-return character
\t    ; a tab character
[ ]   ; a space character. The brackets [] are used to escape a space

A regular expression for a comment is rather complicated, and will be discussed later.

A Scanner Plan
A scanner can be constructed from a set of regular expressions by constructing a special kind of FSM from the set. Suppose we have the regular expressions r1, r2, r3, ..., rn, which represent the tokens and whitespace of some language. We then construct a reduced FSM M for the regular expression
r1 | r2 | r3 | ... | rn

This essentially says that a valid token is represented by one or the other of the regular expressions r1, r2, r3, ..., rn.

But -- We Need More Than One HALT State


A problem with this is that we will end up with a machine that has only one halt state for the regular expression. We instead want our scanner to work through some input string, find a token, then halt in a particular state that indicates which token has been recognized. We therefore need multiple halt states, one for each token.


Multiple Halt State Rule: Assign a separate HALT state to each token regular expression.

M will therefore have one start state and a set of halt states. We assign one halt state to each of the separate regular expressions r1, r2, r3, ..., so that we have n halt states, one for each token. The transitions in M will be on characters in the input stream. All characters are run through machine M, including spaces, tabs, line endings, etc. We want M to accept one token from an input stream of characters and reach a halt state, announcing that it has recognized one token. The particular halt state that it has reached will indicate which token it has recognized.

Example
See the example machine in figure 2. S is the start state. States D, B and C are halt states. Its transitions are S --i--> A --f--> D, S --+--> B, and S --1--> C (Fig. 2). The tokens recognized by this machine are:

if    ; token "if" (halt state D)
+     ; token "+" (halt state B)
1     ; token "1" (halt state C)

Notice that by assigning a separate halt state to each token, we will know which token has been recognized from the state in which we terminate. Thus if the machine terminates in state D, we know that token if has been recognized. If it halts in state B, token + is recognized. If it halts in state C, token "1" is recognized.

Terminate?
But what does terminate mean? We intend to use this machine over and over to find the tokens in a long sequence of tokens. It's supposed to reach a halt state at the end of each token, not necessarily at the end of the long sentence. We therefore need to extend the idea of "termination" a little, like this: Token Termination Rule: Report a token when in a halt state and we cannot find a transition from the current state on the next character. Restart the FSM for each token.

Look at the FSM given above, and the sentence


if1+if

This should be decomposed into four tokens: if, 1, +, if. Notice how the FSM behaves with this input sentence. Starting from state S, it will go to state A, then state D, on the characters i f. State D is a halt state, and has no possible transitions. We conclude that the first token is if, since this is associated with state D. Notice that the FSM is expected to halt although there are more characters to be scanned -- this is another little extension to the basic FSM discussed in chapter 2. For the next token, the FSM is restarted from state S, but now sees character 1. It goes to state C, which is also a halt state, with no possible transitions. So "1" is a token. On restarting from state S, the machine sees character "+", and goes to state B. The token is clearly a


"+". The last token, if, is found through states S, A, D as before.

Problems with the Plan


This ideal plan is unfortunately flawed as it stands. Consider an FSM intended to recognize just the two tokens = and ==, both of which appear in C. The reduced machine will look like the one in figure 3: S --=--> A --=--> F (Fig. 3). Note that this FSM has two halt states, one (state A) corresponding to token =, and the other (state F) corresponding to token ==. If we halt at A each time, then our scanner will interpret the sequence == as two separate = tokens. Clearly, we need to proceed to F on token ==, and stop at A only if the first = character is not followed by another = character. We need a general rule to cover this situation, since our FSM will contain many halt states (in general) in which it's possible to continue on certain tokens. Here's a rule that works well for most modern languages:
Longest Sequence Rule: Choose the longest possible character sequence compatible with a token.

This rule essentially says that if we recognize character =, we should check for a second character =. If we see the second one, then token = = has been recognized. Otherwise, token = is recognized. Even if we reach a halt state somewhere in our state diagram, we are expected to peek at the next character, and see if there's a way to get to some other state on that character. We only halt if we're forced to through the lack of a suitable transition. Note that if two separate tokens form a valid token when placed together, it's necessary to write them with a separating space or tab. For example, characters ":=" written together with no intervening whitespace forms the Pascal assignment operator. With whitespace between the ":" and the "=", they should be recognized as the two tokens : and =, which are also Pascal tokens. The Pascal language syntax is such that token = can never follow token :, but we insist that the lexical analyzer be sufficiently robust to make this discrimination. The error in syntax then becomes a parser error rather than a lexical error.


FSM Scanner Rule


The longest sequence rule can be expressed in our FSM as follows: Longest Sequence Rule (details): In any halt state with one or more out-transitions, inspect the next character C in the input stream. If C is compatible with one of the out-transitions, take the transition. Otherwise, halt, i.e. return with the indicated token

Note that a halt state is associated with the end of some token, which was scanned by starting at the start state and following transitions through to this state. We have therefore scanned through some sequence of characters which make up the token. This sequence may be a literal token, standing for itself, or a lexical token, one of a set of tokens associated with some one token code. Even though we have reached a halt state, we need to inspect the next character to see if there's a transition from that state.

Problems with the Longest Sequence Rule


Suppose our language consisted of the tokens +, *, and +*+. Note that +* is not a token. A reduced FSM for this language will look like figure 4: S --+--> A --*--> B --+--> C, and S --*--> F, where A, C and F are halt states (Fig. 4). Consider the input string +**. This string should be resolved into the tokens +, *, *. However, the FSM will follow the state sequence S-A-B, but then find that state B has no transition on *. This is normally considered to block the FSM at state B (B isn't a halt state, and there's no transition on * out of state B). However, it's obvious that the FSM should have stopped at state A instead. How can this problem be repaired? What's needed is some way of backing up to the last halt state seen during the state transitions. That would be state A, which represents token +. We therefore need a backup rule, which we express as follows:

Backup Rule: If the FSM blocks at state sn on some state sequence s0, s1, s2, s3, ..., sn, then find the largest k < n such that sk is a halt state. Rewind the input list by n-k characters. If k = 0, then declare a lexical error.

This "backing up" operation will not have to be done very often, because such peculiar tokens occur very rarely in most languages. But the mechanism should be provided in any case, for the sake of generality. We clearly don't have to keep a complete trail of states during a token recognition; that would require some kind of pushdown stack and additional program overhead. All that's needed is a variable that holds the last halt state passed through during the token scan, and a variable that counts the number of characters scanned upon reaching that state. Upon starting a token, the last halt state will be NULL, and


the number of characters scanned will be 0. The start state S may theoretically be a halt state, i.e. associated with some token. This could happen in a reduced machine in which there's a loop from S to itself, or from some other state back to S. State S will then appear in the state sequence upon traversing the loop at least once, and the characters scanned will be greater than 0. However, we must disallow the empty token in any practical lexical analyzer, i.e. a token consisting of no characters. Notice that allowing an empty token amounts to having the start state also be a halt state. This kind of token could conceivably appear any number of times between other tokens, and there's no way to make this situation deterministic. We therefore look at the number of characters scanned as a way of deciding whether an error has occurred. If the number of scanned characters is 0 since passing through the last halt state, then we truly have a lexical error. That gives rise to the following rule:
Lexical Error Rule: The backup operation will examine the "most recent" halt state, and the number of characters scanned upon reaching that state. If the "most recent" halt state doesn't exist, or the number of characters is 0, then we have a lexical error.
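Putting the longest-sequence, backup and lexical-error rules together, the scanning loop might be sketched as follows. The FSM interface (step, isHaltState, tokenOf) and the input object are assumed stubs with illustrative names, not lexgen's actual API:

// Assumed interfaces, illustrative only:
struct Source { char peekChar(); char getChar(); void putBack(int n); };
extern Source in;
extern int  step(int state, char c);   // next state, or BLOCKED
extern bool isHaltState(int state);
extern int  tokenOf(int state);        // token code of a halt state
enum { START = 0, BLOCKED = -1, LEXICAL_ERROR = -2 };

int getToken() {
  int state = START;
  int lastHalt = -1;     // "most recent" halt state seen, none yet
  int scanned = 0;       // characters consumed for this token
  int atHalt = 0;        // characters consumed when lastHalt was seen

  for (;;) {
    int next = step(state, in.peekChar());
    if (next == BLOCKED) break;        // no transition: stop scanning
    in.getChar();                      // commit the character
    state = next;
    ++scanned;
    if (isHaltState(state)) {          // note halt states in passing
      lastHalt = state;
      atHalt = scanned;
    }
  }
  if (lastHalt < 0 || atHalt == 0)     // no halt state passed: error
    return LEXICAL_ERROR;
  in.putBack(scanned - atHalt);        // rewind by n-k characters
  return tokenOf(lastHalt);            // the token of that halt state
}

Note that no stack is needed: the two variables lastHalt and atHalt are exactly the "trail" the text calls for.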

Lexical Error Backup Example


Assume that our language consisted of the two tokens +*+ and *. The FSM would then look like figure 5: S --+--> A --*--> B --+--> C, and S --*--> F, where C and F are halt states (Fig. 5). Note that this will successfully recognize either of these tokens. However, the input sentence +** will stop at state B, causing a backup to state S and a lexical error, since +* is not a token.

A backup operation during lexical analysis is clearly going to be clumsy. It requires that we keep a trail of the states as we move along some path in the FSM. We'd prefer not to do this unless it's absolutely necessary.

When is a Backup System Required?

The need for a backup operation in an FSM will appear if there's some path in the machine from start in which a halt state is later followed by a non-halt state. (The path, of course, must follow the arrows from state to state, and needn't trace a cycle more than once.) If this situation doesn't appear at all, then the machine implementation becomes much easier: no backup mechanism need be provided, and if a transition from a non-halt state is impossible, that's a syntax error. A more common situation is one in which a backup may be required along certain paths from start to a halt state, but not in all. That suggests that if a state P is reached beyond which no backup is required, then the subsequent transitions can be made in the absence of a backup strategy. These considerations suggest optimizations in the lexical analyzer, which we have not exploited in the lexgen system described later.

Lexical Error Recovery


When a lexical error is detected, an error message should be generated that points to the offending character. The scanner should then recover, i.e. continue scanning until a legal token is discovered. It

can best do that by skipping the offending character and starting over at the start state, regardless of where it is in the state set. Note that with our backup system, a lexical error can only occur through a backup to the start state, with no characters scanned. As a simple form of error recovery, we should consider skipping characters until a character is seen that agrees with one of the start state transitions. We are then guaranteed some progress before a second error might be reported. This strategy will help reduce a volume of lexical errors that might occur from a sequence of invalid tokens.

Precedence of Literal over Lexical Tokens


Suppose that our language consisted of the following tokens, the first one of which is a kind of identifier:
(a|i|f)+   ; a subset of the identifier tokens
if
fi
iff
space      ; the space character, for whitespace

Notice that the literal tokens if, fi and iff also fit the pattern of an identifier. So there's a built-in conflict. But let's build a DFSM from these expressions. Our non-deterministic machine is expressed by the following table. Notice the column carrying a token name. We need to merge it along with the state names during machine reduction. Here, state A corresponds to the lexical token (a|i|f)+, which is a kind of identifier. State C corresponds to the literal token if, state F corresponds to the literal token iff, state H to token fi, and state I to a whitespace token space.

state    a    i        f     space   token
S        A    A, B, D  A, G  I
(A)      A    A        A             identifier
B                      C
(C)                                   if
D                      E
E                      F
(F)                                   iff
G             H
(H)                                   fi
(I)                                   whitespace

This is clearly nondeterministic, due to the multiple transitions out of state S, on characters "i" and "f". We proceed to reduce the machine by following the process described in the last chapter. The only difference is that we also merge the token column on a state merger. For example, when we form state AG, it not only becomes a halt state (because A is a halt state), but it receives the token identifier (because identifier is attached to state A). Of course, only halt states have tokens attached to them. Other states are transitional and represent no particular token.


state    a    i        f     space   token
S        A    A, B, D  A, G  I
(A)      A    A        A             identifier
B                      C
(C)                                   if
D                      E
E                      F
(F)                                   iff
G             H
(H)                                   fi
(I)                                   whitespace
(ABD)    A    A        ACE           identifier
(AG)     A    AH       A             identifier
(AH)     A    A        A             identifier, fi
(ACE)    A    A        AF            identifier, if
(AF)     A    A        A             identifier, iff

We can now delete states B, C, D, E, F, G and H, since they can't be reached from S.

state    a    i     f     space   token
S        A    ABD   AG    I
(A)      A    A     A             identifier
(I)                               whitespace
(AG)     A    AH    A             identifier
(AH)     A    A     A             fi or identifier
(ABD)    A    A     ACE           identifier
(ACE)    A    A     AF            if or identifier
(AF)     A    A     A             iff or identifier


Notice that the token column carries some composite names. For example, state AH is a merger of states A and H. Both are halt states. A is associated with token identifier, while H is associated with token fi. (These are both compatible with their definitions.) So we show them in the table as the "composite" token {identifier, fi}.

[Fig. 6: the reduced machine as a graph; id stands for identifier.]

This machine as a graph is shown in figure 6. The reduced FSM has eight reachable states: S, A, I, AG, AH, ABD, ACE and AF. Of these, all but S are halt states, indicating that we've scanned a token. However, only states A, I, AG and ABD unambiguously designate a single token. The other three states (AH, ACE and AF) are associated with two tokens, which were acquired through the state merger operation. For example, state AH is associated with both token fi and the identifier token (a|i|f)+.

Resolving a Token Ambiguity


What are we to do about this token ambiguity? Of course, we understand that the underlying problem is that the identifier token (a|i|f)+ contains each of the explicit tokens fi, if, iff, so we shouldn't be surprised that the reduced machine exhibits an ambiguity. One way out is to redesign the regular expression for identifier so that it somehow describes all the identifiers we want, but excludes these particular ones. That's very hard to do, as the reader will discover if s/he tries it in this simple case. It's much easier to provide a general regular expression for some lexical token, then separately list the tokens that we don't want included in the end. In one scenario, suppose that we are in state AH, and can continue with the next character; then by the longest token rule, we should do so. The state table says that a transition to state A is possible on character a, i or f, but not on character space. That corresponds to accepting an identifier that happens to start with "fi". In another scenario, suppose that we are in state AH, and a space is the next character. Then we are forced to choose between an identifier and the literal token fi. It's clear that we should choose the literal token in preference to a lexical token, i.e. one formed through a regular expression. We therefore can frame a preference rule as follows:

Literal/Lexical Preference Rule: When a token conflict between a literal token and a lexical token occurs in a halt state, choose the literal token in preference to the lexical token.

The preference rule won't necessarily solve all conflicts. For example, suppose two different lexical tokens are associated with the same halt state; what then? Well, that should be considered a lexical design error. Both regular expressions apparently contain some string in common. The language designer needs to examine the two lexical token regular expressions involved in the conflict and work out a compromise between the two so that no such conflict occurs.

For example, in C, numbers can be expressed in octal (base 8) or in decimal (base 10). They look the same, except that octal numbers must start with digit 0, while decimal numbers start with some other digit. Thus 0332, 077, and 0022 are considered to be in base 8, while 332, 77, and 22 are considered to be in base 10. A single digit 0 has the same value either way. It's possible to design two regular expressions, one for octal and the other for decimal, like this:
0[0-7]*      ; base 8
[1-9][0-9]*  ; base 10

These clearly exclude each other, since the initial character sets are mutually exclusive. However, we can also just consider both number forms to fit the regular expression
[0-9]+

and then discriminate between the two bases through a lexical function written for such numbers. A conflict between two literal tokens is impossible, unless the same literal token appears twice in the lexical file. (Why is it impossible, otherwise?)
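The lexical function mentioned above can be very short. This sketch picks the base from the leading digit of the scanned [0-9]+ lexeme, letting the C library's strtol do the conversion:

#include <cstdlib>

// 'lexeme' is the digit string scanned for the [0-9]+ token.
// A leading '0' selects base 8, as in C; otherwise base 10.
// (The auxiliary check rejecting digits 8 and 9 after a leading
// zero is omitted here.)
long numberValue(const char* lexeme) {
  int base = (lexeme[0] == '0') ? 8 : 10;
  return strtol(lexeme, 0, base);
}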

Finding and Combining Equivalent States


We need to expand our definition of equivalent states and distinguishable states somewhat. Recall that two states are considered equivalent if their transitions can be pair-wise matched to equivalent states. We need to prevent many of the halt states from being merged through an algorithm that considers them equivalent. Notice that states A, AG, AH, ABD, ACE, and AF in the previous example are equivalent if we ignore the fact that they are associated with different tokens. They are all halt states, and they each have transitions on characters a, i, f to equivalent states. If we take the token association into account, we should not consider them equivalent. Thus states AH, ACE and AF are separable and can't be merged with any of the others. What about states A, AG and ABD, which are each associated with token identifier? Well, on character i, state A goes to A, while AG goes to AH. That distinguishes states A and AG. States AG and ABD are similarly distinguished. Then, on character f, state ABD is distinguished from A and AG, since it goes to ACE while they go to A. Thus all three of these states are distinguishable, and we conclude that no equivalence reduction is possible. That's not always the case, of course. It's still possible that some states can be merged, but only if they cannot be distinguished either by their transitions or by their associated tokens. That gives rise to yet another rule:

Distinguishability Extension: Two states P and Q are distinguishable if they are associated with different tokens, or if their transitions cannot be matched pairwise to equivalent states.

End of File
An end-of-file EOF can potentially occur anywhere within an FSM operation, whether in a halt state or not. It's clear that the scanning will be at an end when EOF is seen. The question is whether it comes at a legal place in the state sequence or not. For example, if token := is in the language, but : is not, and an EOF occurs after character :, then an error should be generated. We find that there are three situations to consider:
  EOF occurs in the start state. This must be considered a normal termination of the program. The scanner should generate an end-of-file token. In fact, we will always make sure that EOF is a legal transition from the start state in any lexical analyzer, by the way in which it is constructed.
  EOF occurs in some other state A, and state A is a halt state for some token. Then the token is clearly recognized. EOF should not be a transition from any state other than the start state. The next token will uncover the EOF.
  EOF occurs in some other state A, and state A is not a halt state for some token. The FSM program will clearly back up to the last discovered halt state, or to the start state, whichever was last seen. These are clearly covered by one of the first two situations above. A backup to some legal halt state will result in the recognition of a token. A backup to the start state should generate a lexical error recovery, skipping characters until something legal is seen. That legal character may be EOF.

Character Backup and End-of-Line


We'll assume that no token can be subdivided by an end-of-line EOL. The end-of-line character can itself be a token, but it cannot be part of any token. This is the case for most modern free-form languages, and also for older line-oriented languages. With this assumption, we may read input files line-by-line, rather than character by character. In any case, character fetching can be insulated from any lexical analyzer by a suitable interface. Since we choose to read a whole line at a time, the current line will always be in a buffer for reference, and useful for backing up. Given a line buffer, we can manage our own character putback operation very easily, given that we never need to back up into the previous line. The token FSM always starts on a fresh token, and a token will either terminate within the current line, or will have to start at the beginning of a new line. This strategy will also work very well with whitespace, provided that a whitespace "token" is considered to be a space, a tab, a line ending, or a comment. Although our backup strategy is defeated by a comment extending over several lines (unlike tokens), an invalid comment will simply cause a backup to the beginning of the current line, and a retry on some token form. The recovery action will be poor, but can hardly be improved by any other strategy. An "invalid comment" in Pascal or C can only be a comment that contains some character other than a printable ASCII character, by the way we write its regular expression.

Building a Lexical Analyzer


If we are to automate a lexical analyzer, we need a language that describes the tokens, token codes and token classification, and a program that can translate that language into a working lexical analyzer. In particular, we need a language that can describe regular expressions, noting that it must be powerful enough to describe the metasymbols used in regular expressions, as well as non-printable ASCII characters. We will also describe a shorthand notation that describes a range of ASCII characters, for example all lower-case letters.

The token description will be written as a text file that will be interpreted by a special program (lexfile) that generates an FSM recognizer program. Given a lexical description file (for example, Pascal.lex), it will generate a lexical analyzer source file in C++ form, which we might call Pascallex.cpp. File Pascallex.cpp in fact contains only the two member functions initNames and getToken. A working lexical analyzer requires additional library functions drawn from qparser/lib/qplib.lib. Directory lextest contains makefiles and other tools to build a lexical analyzer program. An example of such a lexical file is qparser/lextest/Pascal.lex.


Class Cinput
Class Cinput, found in files qparser/lib/cinput.h and qparser/lib/cinput.cpp, provides tools for reading text file input lines, fetching characters from that line, and backing up as required during parsing and lexical error recovery. It also notices an end-of-file, and delivers the ASCII character 0x4 on reaching an end of file. Cinput can also accept a null-terminated string as the input, and that string may contain end-of-line characters, tabs, etc., just as a file might.

Finally, Cinput will notice if a line ends in a backslash character \. If so, the backslash is removed and the next line is concatenated onto this one. This convention would be very difficult to support with token regular expressions, but does provide a uniform way to split long lines into shorter ones.

Cinput is used by first instantiating an object from it, using the default constructor. You then open a file or a string. This can be closed at any time, but normally only at the termination of the lexical analyzer. Only one file can be open at any one time; if you need more than one file, you must instantiate more Cinput objects.

Once a file is open, function Cinput::getChar returns the "next" character. This will be the first character of the file or string on the first call. It also advances to the character following this one. Function Cinput::putBack(n) will rewind the character stream by n characters. The default is 1, which merely reverses the effect of the previous getChar, but you may also rewind more than one character. The rewind can only back up as far as the current line's origin.

Cinput::getChar will deliver a line feed character at the end of the current line. Now a source text line may be terminated with a backslash; if so, getChar will ignore both the backslash and the following line feed character, effectively concatenating the current line with the next one. Several such lines can be concatenated into a single line for getChar purposes. In this way, our lexical analyzer is not expected to deal with a line terminated with a backslash.

Several other functions are also provided, as follows:

Cinput::peekChar returns the next character without advancing through it.

Cinput::catchToken marks the origin of a new token. This marked position will be returned later through function Cinput::getTokenPntr. The length of the token will be returned later through function tokenLen.

Cinput::setEcho causes printing of the input source line to cout. It's better to use the function getLineCallback instead; this is called just after each file source line is read, and can be used to format and print the source line.

Cinput::atEOF returns 1 if the next character is an end-of-file, and 0 otherwise. Usually the lexical analyzer will simply watch for the appearance of the end-of-file character, EOFchar, instead.
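Here is a sketch of typical client code. The file name and the loop are illustrative; the calls are the Cinput members just listed, and we assume the open and close members return and behave as described above:

Cinput in;                            // default constructor
if (in.open("test.src")) {            // open a file (a string works too)
    char ch;
    while ((ch = in.getChar()) != EOFchar) {      // 0x4 at end of file
        if (ch == ':' && in.peekChar() == '=')
            in.getChar();             // e.g. consume the '=' of a ":=" pair
        // ... classify ch ...
    }
    in.close();
}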

Class Clexf
Class Clexf describes a complete lexical analyzer. It will contain a queue of Ctoken objects, and is associated with one file or input stream. (The queue is defined by class CtokenQ). Clexf supports functions for fetching tokens, used primarily by the parser, and functions for filling tokens from the input stream. It inherits class Cinput, which provides the low-level character-fetching tools described above. Most of the Clexf member functions (and the class definition) are found in qparser/lib/lexf.h and qparser/lib/lexf.cpp. These are the target language-independent portions of a lexical analyzer.


Two important member functions are generated for a particular target language: Clexf::getToken and Clexf::initNames. getToken scans exactly one token from the input stream; initNames fills a token names array needed for debugging and other purposes. These will appear in a single file generated by the programs lexfile or lextbl.

The general idea here is that the user will write or modify a lex file or a grammar file that specifies the tokens of the language. This file is then used to generate a reduced FSM, which becomes expressed as a C++ program. We've described how this is done in an abstract way. The remainder of this chapter will give more details for doing this at a practical level.

Class Ctoken
Class Ctoken, found in files qparser/lib/lexf.h and qparser/lib/lexf.cpp, carries enough information in its data members to fully describe one token. It also supports several member functions used to construct the token's data (if any), and access it in various ways. Here's a simplified version of class Ctoken:
typedef string::const_iterator csi;

class Ctoken : public Csem {
    friend Clexf;
private:
    TCodeType tCode;      // token code associated with the token
    union {
        long int ivalue;  // CHAR .. ULONG
        double dvalue;    // FLOAT .. DOUBLE
    };
    string svalue;        // IDENT, STRING, CCODE
    Clexf *parent;        // the parent lexical analyzer
    bool isCopy;
public:
    Ctoken() : Csem(OTHER), parent(0), tCode((TCodeType) 0), isCopy(0) {}
    Ctoken(Clexf *p) : Csem(OTHER), parent(p), tCode((TCodeType) 0), isCopy(0) {}
    Ctoken(TCodeType tc, Clexf *p);   // creates a new token
    Ctoken(const Ctoken &cv);         // copy constructor

    long getInteger(void) const;      // for any of the integer types
    double getDouble(void) const;     // for any of the real types
    // getStringValue works for any of the types
    const string getStringValue(bool quoteIt= 0) const;
    void setValue(double dv);
    void setValue(long int dv);
    void setValue(const string& sv);
    // TRUE if svalue used
    bool isString(void) {return (semt == IDENT || semt == STRING || semt == CCODE);}
    bool isInteger(void) const {return (semt >= CHAR && semt <= ULONG);}
    bool isReal(void) const {return (semt >= FLOAT && semt <= DOUBLE);}
    virtual void dump(ostream& out= cout) const;   // describe token to out
    virtual int classCode(void) const {return CTOKEN;}


    // The 'getXXX' functions with parameters are called from the
    // generated lexical FSM and are expected to convert the string
    // into decorations for this Ctoken object. tp points to the
    // beginning of the string (a string iterator), and endtp to
    // just past the end. semt and tokenCode should be set by this,
    // not left to chance.
    // The 'getXXX' functions with an int parameter are expected to
    // create a dummy token with that token code. This is for
    // semantics error recovery.

    virtual void getWhiteSpace(csi tp, csi end, bool casesens) {semt= OTHER;}
    virtual void getWhiteSpaceC(int code) {semt= OTHER; tCode= code;}
    virtual void getLiteral(csi tp, csi end, bool casesens) {semt= RESWORD;}
    virtual void getLiteralC(int code) {semt= RESWORD; tCode= code;}
    virtual void getEOL(csi tp, csi end, bool casesens);
    virtual void getEOLC(int code);
    virtual void getEOF(csi tp, csi end, bool casesens);
    virtual void getEOFC(int code);

    // C lexical functions
    // C identifier
    virtual void getIdent(csi tp, csi end, bool casesens);
    virtual void getIdentC(int code);
    // C integer
    virtual void getInteger(csi tp, csi end, bool casesens);
    virtual void getIntegerC(int code);
    // C float
    virtual void getFloat(csi tp, csi end, bool casesens);
    virtual void getFloatC(int code);
    // C string
    virtual void getString(csi tp, csi end, bool casesens);
    virtual void getStringC(int code);
    // C character
    virtual void getCharacter(csi tp, csi end, bool casesens);
    virtual void getCharacterC(int code);

    // Pascal lexical functions
    // Note: Use getIdent for a Pascal identifier, with appropriate casesens
    // Pascal integer
    virtual void getPInteger(csi tp, csi end, bool casesens);
    virtual void getPIntegerC(int code);
    // Pascal float
    virtual void getPFloat(csi tp, csi end, bool casesens);
    virtual void getPFloatC(int code);
    // Pascal string
    virtual void getPString(csi tp, csi end, bool casesens);


    virtual void getPStringC(int code);
};

This header file carries a typedef of a string iterator, csi. This is essentially a const char* pointer to characters in a string.

The lexical analyzer will manage a small queue of Ctoken objects through class CtokenQ. This makes it possible to scan ahead by a few tokens, or to insert a new token in the token queue. Tokens from the input stream are normally only read when absolutely required by the parser. This makes the scanner and parsing system especially friendly when used in an interactive way.

Ctoken carries enough fields to support any integer (as a long int), any floating-point literal (as a double), or any string or identifier (as the string member svalue). This is sufficient to support almost any modern language, since more complicated objects can be built up from these, and the base language rarely requires more complicated token forms.

Ctoken carries a token code, tCode, which is a unique integer assigned to each literal and lexical token. It also carries a semType, semt, in its base class Csem, which describes the token in more detail. This is an enumerated type, described in file qparser/lib/table.h. For example, the semType of some literal integer may be CHAR, UCHAR, SHORT, USHORT, etc., depending on the relative size of the integer.

The parent data member points to a parent object (class Clexf), which is the lexical analyzer to which this token belongs. This is set when the parent object is created, and should not be changed.

Given a Ctoken object, you can obtain a lexical token's value through one of the accessor functions getInteger, getDouble or getStringValue. The first two are only valid for a long int or double value, respectively. getStringValue works for any token, returning a copy of a string representing the token. If you pass true for the boolean parameter quoteIt, then the returned string is quoted in C fashion. This includes any internal characters that require escapes, for example a C quote mark, a backslash character, or a non-printable character.
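For example, once the analyzer has filled a Ctoken object, client code can pick an accessor based on the classification members shown in the class above (a sketch, assuming the usual lexf.h environment for cout and string):

void report(Ctoken& tok) {
    if (tok.isInteger())
        cout << "integer: " << tok.getInteger() << endl;
    else if (tok.isReal())
        cout << "real: " << tok.getDouble() << endl;
    else if (tok.isString())
        cout << "string: " << tok.getStringValue(true) << endl;  // quoted, C style
    else
        cout << "other: " << tok.getStringValue() << endl;       // works for any token
}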

Describing the Lexical Analyzer FSM through Regular Expressions


We now describe the lexical analyzer regular expressions as used in Lexgen. We've reviewed regular expressions in the previous chapter. We now discuss just how to represent tokens in a textual manner. A full implementation of a practical lexical analyzer metalanguage requires all of the following:

A way to describe whitespace. We can use a regular expression to define white space, but we need to specify that no token will result from the description; whitespace is supposed to disappear, and not appear as a token for the parser to deal with.

A way to describe a line ending and an end-of-file.

A short-hand way to describe a subset of ASCII characters. The bracket convention [ ... ] described in the previous chapter will be used for this.

A way to associate a regular expression for a lexical token with a generic token name and a lexical function.

A short-hand way to describe literal tokens which may incidentally contain regular expression metacharacters; for example, "*", "+", etc. are tokens in most languages. These would be ugly to describe otherwise.

A generator program to convert these specifications into the source C++ code needed for a working lexical analyzer. The resulting analyzer may be built into a compiler or interpreter, or may run as a standalone program. The generator program needs to construct an FSM with tokens associated with halt states, then reduce it, and resolve any token conflicts through the rules given earlier. Finally, the generator is expected to write valid C++ source code with appropriate comments for the particular lexical analyzer desired by the user.

Regular Expression Metalanguage


We want to describe a regular expression in an ordinary text file using only printable characters and no control characters. That will permit us to generate them with an ordinary technical editor, rather than be forced to develop a special menu-driven editor for the purpose. We've discussed these in the previous chapter. By way of review, a regular expression consists of any one of the following forms:

r1 r2     representing the concatenation of regular expressions r1 and r2,
r*        representing zero-or-more concatenations of the regular expression r,
r+        representing one-or-more concatenations of the regular expression r,
r?        representing zero or one instance of the regular expression r,
r1 | r2   representing one or the other instance of the expressions r1 or r2,
c         where c is any printable ASCII character,
[ ... ]   representing any one of the ASCII characters listed in the brackets,
(r)       representing the regular expression r.

Precedence Rules
( ) enclose some expression, and the interior form takes on higher precedence than any operator affecting the parenthesized expression.
[ ] enclose some character set, and has high precedence.
*, + and ? bind to the immediately preceding expression, and have the next-highest precedence.
Concatenation has intermediate precedence.
| (alternation) has the lowest precedence.

For example, the expression
abc|def

is equivalent to
(abc)|(def)

but
abc*de

is equivalent to
ab(c*)de

Bracket Operator
The bracket operator [ ] is very useful as a way of describing one character out of a set of ASCII characters. We use it to describe some (possibly large) range of ASCII characters in a concise way. Here are some examples that illustrate the notation:
[k]        represents the single character k.
[ ]        represents one space character.
[abc]      represents one of the characters a, b, c. Note that this does not represent the token "abc".
[a-z]      represents one of the characters a, b, c, ..., x, y, z. The dash character provides a shorthand way of describing a potentially large range of possible characters.
[a-zA-Z]   represents any one letter.
[0-9]      represents any one digit character.
[0\-9]     represents any one of the three characters 0, -, 9. The backslash causes the dash character - to lose its meaning as a "range" metasymbol.
[\]]       represents the character "]". Note that ] must be escaped to avoid confusion with the metacharacter "]".
[\t\n]     represents a tab or a line feed character.
[ -~]      represents all the printable ASCII characters from space (32) through tilde (126).
[\4]       represents the ASCII control character 4. A number of 1, 2 or 3 digits following the backslash is interpreted in octal. By convention, we use [\4] as an end-of-file character.

We could use the alternation operation | to describe any of these, but the required form is clumsy. For example, the set [a-e] is equivalent to
(a|b|c|d|e)

You can in fact use this notation in lexgen, but we obviously don't want to have to type out something like [a-zA-Z0-9] the long way.

Single Characters
You can use any printable ASCII character or sequence of printable ASCII characters without brackets in a regular expression, provided that the character:

is not a space, tab or line feed, and
is not a metacharacter.

The metacharacters are:

( ) * + \ ? | [

Use the bracket notation or the escape character if you need to represent an ASCII space or metacharacter. For example, don't write "(" by itself; write it as \( or as [(].

No Spaces
A regular expression must be written with no spaces. If you must include a space, write it within brackets, like this: [ ]. Each regular expression must be written in one line, and not be broken by a line ending. You can break a line into two lines by ending the first one with backslash \. This implies that a literal line ending character must be written \n or [\n].

Example 1 - an Identifier
A complete C identifier regular expression can be described as follows:
[_A-Za-z][_A-Za-z0-9]*

In C, an identifier can start with a letter or an underbar. It may continue with underbar, letter or digit. This is expressed in the regular expression above by the opening bracket, which represents the first character, followed by zero or more of the second bracket.


Example 2 - a Pascal Integer


[0-9]+

This obviously describes one or more digit characters; that is, an integer.

Example 3 - a C Integer
C integers can have more complex forms. Here are examples of the basic forms:
1568     a decimal integer; must not have a leading zero
0277     an octal integer; must have a leading zero
0xFEE    a hexadecimal integer; must have a leading 0x
15L      becomes a type long number
15U      becomes a type unsigned long number

The regular expression given below describes all these forms:


(([0-9]+)|(0[xX][0-9a-fA-F]+))[lLuU]?

Octal and decimal numbers are carried in the first half of the alternative, i.e.
[0-9]+

Hex numbers are carried in the second half of the alternative, i.e.
0[xX][0-9a-fA-F]+

Notice how the hex notator "x" may be upper or lower case. Also the subsequent hexadecimal characters are drawn from the digits and the upper or lower case letters A, B, C, D, E and F. The last part of the regular expression,
[lLuU]?

describes an optional tag at the end of any number, to mark it as a long and/or unsigned.

Example 4 - Pascal Whitespace


Pascal whitespace is described by the following regular expression:
[ \t\n]|\(\*[\t\n -~]*\*\)|\{[\t\n -|~]*\}

This can be broken into three alternative whitespace elements, as follows:


[ \t\n]               space, tab or line ending
\(\*[\t\n -~]*\*\)    (* *) style of Pascal comment
\{[\t\n -|~]*\}       { } style of Pascal comment

Notice that the characters (, ), and *, which are all needed in comments, are metacharacters and require an escape character. In the second and third forms, the material between the opening and closing comment markers is zero or more arbitrary printable characters, including tab, line ending, space and any other printable character. Rejected are any other control characters and the ASCII extended characters (above 0x7E).

Example 5 - Pascal Quoted String


A regular expression describing a Pascal quoted string is:
'([ -&(-~]|'')*'

In Pascal, a quoted string is opened and closed with a single quote mark ('). In order to embed a single quote mark in a string, it must be doubled. Hence a string is zero or more of either a doubled quote mark, or any printable character from space to tilde. We've written the latter sequence this way in order to avoid a conflict on a single quote:
[ -&(-~]

In ASCII order (see the ASCII table in chapter 1), the relevant characters run: space, ..., &, ', (, ..., ~. The two ranges [ -&] and [(-~] therefore omit the single quote from the set in the brackets. Excluding the single quote from this form avoids an ambiguity that would otherwise arise between the paired quote marks and a terminating single quote mark.
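Lexgen compiles this expression into an FSM, but you can sanity-check the expression itself with any ECMAScript-style regular expression engine, since the bracket ranges and the doubled-quote alternative carry over directly. A small test program (illustrative only; std::regex is not used by lexgen):

#include <iostream>
#include <regex>

int main() {
    // The Pascal string expression from the text, in ECMAScript syntax.
    std::regex pstring(R"('([ -&(-~]|'')*')");
    std::cout << std::regex_match("'hello'", pstring)    // 1: plain string
              << std::regex_match("'don''t'", pstring)   // 1: doubled quote accepted
              << std::regex_match("'it's'", pstring)     // 0: bare quote inside rejected
              << std::endl;                              // prints 110
}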

A Complete .lex File


We can now describe how to write a token description text file that can be transformed into a lexical analyzer program. Here's a typical file, C.lex, found in directory lextest; this describes the C lexical tokens, and a few C literal tokens:
# C.lex

# Case sensitivity
casesens on

# White space
WhiteSpace getWhiteSpace [ \t\n]|/\*[\t\n -~]*\*/|//[\t -~]*[\n]

# end-of-file
EOF getEOF [\04]

Identifier getIdent [_A-Za-z][_A-Za-z0-9]*
Integer getInteger (([0-9]+)|([0][xX][0-9a-fA-F]+))[lLuU]?
Real getFloat [0-9]+(([.][0-9]+)|\
([eEfF][\+\-]?[0-9]+)|([.][0-9]+[eEfF][\+\-]?[0-9]+))
String getString "([ -!#-~]|\\([a-z"'\\]|\
x[0-9a-fA-F][0-9a-fA-F]|[0-7][0-7]?[0-7]?))*"
Character getCharacter '([ -&(-~]|\\([a-z"'\\]|\
x[0-9a-fA-F][0-9a-fA-F]|[0-7][0-7]?[0-7]?))'

# Literal tokens -- regular expression rules do NOT apply here
+ if
+ for
+ switch
+ =
+ ==
+ :
+ ?
+ *

Here are the rules for this file. We will use the suffix .lex for such files.

A # in the first column means that the line is a comment, extending to the end of the line. A fully blank line is ignored.

A line in this file can be folded into the next line by writing a backslash (\) at the end. It is then assumed to continue on the leftmost character of the next line. We've done this in the above example for the String and Character regular expressions.

casesens on (case-sensitivity ON) means that upper and lower-case letters will be distinguished in each of the lexical and literal tokens, from this point on. You can switch case-sensitivity off and on as needed throughout the file. For example, in Pascal, the literal tokens if, for, then, etc. can also be written IF, FOR, THEN, If, iF, For, etc. So can identifiers. But lower case and upper case letters in a string definition must be preserved in case.

A "+" in the first field means the second field specifies a literal token. The + must be followed by at least one space. The literal token starts with the first non-space character following the "+" and ends at the line ending or at the next space, whichever occurs first. A literal token ending in the backslash "\" can be written by adding a space after the backslash; if you don't do this, this line in the file will be appended to the next line. If a literal token contains any letters, and you are in case sensitive mode, then it must be written in exactly the case expected during scanning. For example, if we added the literal token IF, then there could be two tokens if and IF separately recognized by the lexical analyzer.

A name in the first column, i.e. Identifier (rather than "+"), means that the token is a lexical token. The second field specifies the lexical function to be called just after the token is scanned, i.e. getIdent. (Lexical functions are discussed in a following section). The third field must contain a regular expression that describes the token, i.e.
[_A-Za-z][_A-Za-z0-9]*

Whitespace is treated as a lexical token, but with the special name WhiteSpace, the lexical function getWhiteSpace, followed by a regular expression that defines all of the possible white space forms. By using this special name, the system will not consider white space as a token, but rather something to scan over and ignore.

End-of-file has the special name EOF, lexical function getEOF, and the regular expression [\4]. These conventions are built into the lexical generator code, but normally shouldn't have to be altered. This must be part of any lex file.

End-of-line is supported by the name EOL, the lexical function getEOL, and (recommended) the regular expression [\n]. This should be included in the lex file only if you need to see each end-of-line, and in that case, you need to make sure that end-of-line is not in the white space regular expression.

Token codes are automatically assigned in the order in which the tokens appear, starting with 1, except for WhiteSpace, which is always assigned token code 0.

Lexical Functions
The purpose of a lexical function is to translate the string form of some token into an internal form. The internal form must be supported by class Ctoken. For example, any integer is carried in Ctoken's data member ivalue. Any floating-point number is carried in dvalue. Any string must be copied to the STL string svalue. Note that ivalue and dvalue share a union, and a Ctoken carries just one such value in any case. Ctoken also requires that tCode, the token code, and semt, a semType descriptor flag, be set by the lexical function.

The prototype for any lexical function is:


virtual void Ctoken::getIdent(csi tp, csi end, bool casesens);


This must be a member function of the Ctoken class, found in lib/lexf.h and lib/lexf.cpp. Variable tp is a pointer (actually a string::const_iterator) to the first character of the token, and end points just past the last character. This token is not terminated by a null character, so the end position must be considered during a scan. tp in fact points into an input line buffer of characters read through class Cinput, found in lib/cinput.h. This buffer carries a complete input line; recall that Cinput concatenates lines terminated with a backslash. Each such line is also a string, and carries a line-feed character as its terminal character.

The lexical function must also classify the object by its token number and a semantic type. The enumerated type tokenType, found in lib/table.h, should be used for this purpose. The semantic type of a number is inferred from its absolute size and any other clues from the syntax. For example, the number 15 would be considered type CHAR, while the number 513 would be considered a SHORT. A compiler can later work out appropriate type conversions based on the number's environment, but in general, the smallest possible type should be assigned to a literal number. A quoted string should be categorized as a STRING, and an identifier as an IDENT.

The lifetime of a Ctoken object may exceed that of the input line. For that reason, if Ctoken is supposed to carry a string or identifier, a fresh copy is allocated from the heap. This copy will be deallocated when the Ctoken is deleted. However, in use, tokens are carried in a small queue and are reused as they are passed.

Some compilers assign a separate thread to token collection. The scanner then reads tokens, pushing them into a pipe connecting the scanner task to the parser task. The parser of course reads tokens from the pipe. This may appear to simplify operations, but in fact it makes any communication from the parser back to the scanner difficult (the scanner may be scanning far ahead of the parser). It also means that syntax errors will be difficult to attach to the source text, unless a pointer to the source is also carried in the queue.
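For illustration, a getIdent-style lexical function might look like the following sketch. This is not the library source in lib/lexf.cpp, just the general pattern: copy the characters between tp and end into svalue, fold case if requested, and classify the token:

void Ctoken::getIdent(csi tp, csi end, bool casesens) {
    svalue.assign(tp, end);       // fresh copy; it outlives the input line
    if (!casesens)                // fold to lower case if case-insensitive
        for (size_t i = 0; i < svalue.size(); i++)
            svalue[i] = (char)tolower(svalue[i]);   // tolower from <cctype>
    semt = IDENT;                 // classify as an identifier
    // tCode was already set by the generated getToken FSM
}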

Using Utility lexfile


The executable lexfile accepts a lexical description file, like the one given above. It generates a C++ source file containing a reduced FSM program, like this:
lexfile C.lex > Clex.cpp

This simple command covers a remarkably large and complex operation on the C.lex file, i.e. parsing the file, parsing the regular expressions, generating and reducing the associated FSM, detecting and reporting conflicts and errors, and, finally, writing a source file to stdout. You can view the intermediate results of these operations through a -d option, explained by running lexfile with no parameters.

Note that lexfile only constructs one file. A complete lexical analyzer program, for testing purposes, can be built using the makefile found in directory qparser/lextest. The makefile includes a line that calls program lexfile. You can build a complete program by copying all the files in qparser/lextest to a new directory, writing your own lexical file mylex.lex in that directory, then calling
make LEX=mylex

The generated file Clex.cpp contains two functions initToken and getToken, both members of class Clexf. Function initToken initializes some variables needed by the lexical analyzer. You can view this to determine the token code assignments.

The member function Clexf::getToken(Ctoken &token) implements a reduced finite state machine that skips white space, complains on a lexical error, and returns after scanning exactly one token. It adjusts the object token appropriately with the correct token code, using any lexical function required.

An Example getToken
getToken is part of a fully generated file. It's organized as a set of C++ switch or if-then control statements, using the goto operator to move around in the machine. Each state in the FSM is represented by a clause starting with a label such as ST_13, which represents state 13 in the machine. It is often a rather large function, and it may not appear exactly as shown below. Here's how getToken starts:
void Clexf::getToken(Ctoken &token) {
    char ch;
ST_0:
    token.clear();
    tokenIndex= 0;
    tokenLength= 0;
    ch= getChar();
    catchToken();
    if (ch==EOFchar) {
        token.setsemType(EOFTOKEN);
        token.setTcode(EOFCode);
        return;
    }
    switch (ch) {
    case '\t': case '\n': case ' ': goto ST_0;
    case '\"': goto ST_1;
    case '\'': goto ST_34;
    case '*': goto ST_45;
    case '/': goto ST_36;
    case '0': goto ST_47;
    case '1':
        ... etc.
    case '8': case '9': goto ST_49;
    case ':': goto ST_39;
    case '=': goto ST_51;
    case '?': goto ST_43;
    case 'A': case 'B': case 'C':
        ... etc.
    case 'Y': case 'Z': case '_':
    case 'a': case 'b': case 'c': case 'd': case 'e': goto ST_7;
    case 'f': goto ST_50;
    case 'g': case 'h': goto ST_7;
    case 'i': goto ST_48;
    case 'j': case 'k': case 'l': case 'm': case 'n':
    case 'o': case 'p': case 'q': case 'r': goto ST_7;
    case 's': goto ST_46;
    case 't': case 'u': case 'v': case 'w': case 'x':
    case 'y': case 'z': goto ST_7;
    default: break;
    }
    if (lexCheck(token, true)) return;
    goto ST_0;

The section labeled ST_0: clears the token object to a default state, reads the next character, and sets the token pointer to that character (catchToken). This pointer may be used later in a lexical function. Two variables are initialized here: tokenIndex, which counts the number of characters passed through during a state sequence, and tokenLength, which will carry the actual length of a token. Variable tokenIndex will be incremented on each getChar call. Note that this getChar is not Cinput::getChar itself; it calls Cinput::getChar, but has some additional functionality.

Function catchToken notifies the Cinput class that a token is about to be scanned. That will be used later to provide a csi pointer to the first character of the token, through function Cinput::getTokenPntr.

The first character of some new token is always checked for an end-of-file, which will set the token object to an end-of-file state. If it's not an end-of-file, the character is passed to the switch statement to be classified. Each of the ST_NNN sections is in the form of a switch or an if-then statement, whichever seems to yield the least amount of source code. State ST_0 is usually the most complicated, as this one tends to sort out the major classes of token. Note that these conditionals send control off to various other clauses in the source code.

How getToken Recognizes a Whitespace Token


A space, tab, or line feed is scanned, and control is sent back to ST_0, without setting token or returning. This results in skipping these whitespace characters. Skipping a C comment is more complicated, since the '/' character can launch either of two different kinds of comment, or it might be used as a token in its own right (it isn't in C.lex). Let's follow that case through to see just how this FSM discriminates these cases. Control is passed to ST_36 on character /. Here's what that looks like:
ST_36:
    ch= getChar();
    {
        if (ch == '*') goto ST_6;
        if (ch == '/') goto ST_10;
    }
    if (lexCheck(token, true)) return;
    goto ST_0;

In state 36, if character * is next, then control passes to ST_6. If character / is next, control passes to state 10. In any other case, lexCheck is called, which will examine the situation and decide if a lexical error is warranted. (It is, here, since no legal token has been seen yet). Let's look at the code for state 6, given next:
ST_6:
    ch= getChar();
    {
        if (ch >= '\t' && ch <= '\n') goto ST_6;
        if (ch >= ' ' && ch <= ')') goto ST_6;
        if (ch == '*') goto ST_32;
        if (ch >= '+' && ch <= '~') goto ST_6;
    }
    if (lexCheck(token, true)) return;
    goto ST_0;

Most characters return to ST_6, including space, tab, line feed, and others. We're now scanning the material past the opening "/*", so we expect a loop on any character except '*'. Non-printable ASCII characters are rejected, however, through falling into the lexCheck function. lexCheck will decide that this would be a lexical error. Character * passes control to ST_32, given next:
ST_32:
    ch= getChar();
    {
        if (ch >= '\t' && ch <= '\n') goto ST_6;
        if (ch >= ' ' && ch <= ')') goto ST_6;
        if (ch == '*') goto ST_32;
        if (ch >= '+' && ch <= '.') goto ST_6;
        if (ch == '/') goto ST_0;
        if (ch >= '0' && ch <= '~') goto ST_6;
    }
    if (lexCheck(token, true)) return;
    goto ST_0;

More * characters cause control to remain in state 32. Character / causes control to return to ST_0, marking the end of this comment. All other characters return to state 6, which we've looked at above. Again, non-printable ASCII characters are rejected. Notice that any sequence of whitespace is similarly skipped over, without a return from this getToken function.

The Function lexCheck


This function is in file lexf.cpp, reproduced below. It's supposed to perform any backup operations, call a lexical function, and return true or false, depending on the validity of the token:
bool Clexf::lexCheck(Ctoken &token, bool casesens) {
    // called upon reaching a "halt" state in the FSM
    if (tokenLength == 0) {
        lexError();       // no token is legal here or previously
        // backup to beginning of token, then skip one character
        if (tokenIndex <= 0) getChar();
        else Cinput::putBack(tokenIndex - 1);
        return false;     // marks a lexical error
    }
    // a token is legal here or somewhere back in this scan
    Cinput::putBack(tokenIndex - tokenLength);   // dump any surplus characters
    PCtoken p= getFunctions[token.tCode];
    assert(p != 0);
    csi cp= getTokenPntr();
    (token.*p)(cp, cp+tokenLength, casesens);    // fix up token
    return true;   // should be OK, will return to token caller
}

This function performs any required backup to a preceding halt state; this action is controlled by the variables tokenLength and tokenIndex. The backup is actually performed by function putBack, which does nothing if its parameter is 0. The function pointer p is obtained from a token code lookup in the array getFunctions. (You can find getFunctions in the generated lexical file). This is called through the form
(token.*p)

which dereferences the function pointer and executes the call. Although it isn't obvious, this calls one of a set of lexical functions, for example getIdent or getEOF. These are among the Ctoken functions shown above. The parameters expected by each lexical function (such as getIdent) are a pointer to the first character of the token (cp), a pointer just past the last character of the token (cp+tokenLength), and whether the token is to be scanned in a case-sensitive or case-insensitive manner (casesens). Since this function is a member function of Ctoken, it must set the data members of this Ctoken object (token). A typical lexical function is getIdent, called for identifier tokens. Its prototype is given below:
virtual void getIdent(csi tp, csi end, bool casesens);

and its implementation can be found in file lib/lexf.cpp. This has to be a virtual function in order to form a pointer to it, as we've done in this implementation.
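The pointer-to-member syntax deserves a small standalone illustration. The following complete program (unrelated to lexgen) uses the same call pattern as lexCheck:

#include <iostream>
using namespace std;

struct Token {
    virtual void getIdent()   { cout << "identifier\n"; }
    virtual void getInteger() { cout << "integer\n"; }
};

typedef void (Token::*PFunc)();    // pointer to a Token member function

int main() {
    Token token;
    PFunc p = &Token::getIdent;    // take the member's address
    (token.*p)();                  // dereference and call: prints "identifier"
    p = &Token::getInteger;
    (token.*p)();                  // prints "integer"
    return 0;
}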

How getToken Recognizes a Literal Token


The literal token == is specified in C.lex. Let's follow its recognition. State 0 sees the first = character, and sends control to state 51, given next:
ST_51:
    token.setsemType(RESWORD);
    token.setTcode(10);   // =
    tokenLength= tokenIndex;
    token.getFunction= &Ctoken::getLiteral;
    ch= getChar();
    {
        if (ch == '=') goto ST_35;
    }
    if (lexCheck(token, true)) return;
    goto ST_0;


This state recognizes that a single = character also comprises a token. That's why the token object is preset to the token code 10, the tokenLength is caught, the semt is set to RESWORD, and the lexical function is set to Ctoken::getLiteral (which does nothing). After that, we fetch another character and test for a second = character. That would send us to state 35.

However, if the next character were anything else, lexCheck is called. This time, lexCheck will conclude that "=" is a valid token (through the variable tokenLength and others). lexCheck returns true if a valid token is seen and false otherwise, so the "=" token that we've scanned is accepted. Incidentally, the character following the "=" character is put back by lexCheck. lexCheck will also call the lexical function Ctoken::getLiteral, through the function pointer getFunction.

Now suppose we in fact see a second = character. That takes us to state 35, given next:
ST_35:
    token.setsemType(RESWORD);
    token.setTcode(11);   // ==
    tokenLength= tokenIndex;
    token.getFunction= &Ctoken::getLiteral;
    if (lexCheck(token, true)) return;
    goto ST_0;

This clearly sets the token type to RESWORD, and the token code to 11. No character was fetched; none is needed, since no other token starts with "==". lexCheck will discover that no backup is needed, by comparing tokenLength with tokenIndex. It also calls the lexical function Ctoken::getLiteral through the function pointer getFunction. It happens that getLiteral does nothing.

Calling a Lexical Function for an Integer


State 15 is one of the terminal states for a C integer, given below. (There may be more than one way to terminate a given token).
ST_15:
    token.setsemType(OTHER);   // token function should set this
    token.setTcode(3);   // (([0-9]+)|([0][xX][0-9a-fA-F]+))[lLuU]?
    tokenLength= tokenIndex;
    token.getFunction= &Ctoken::getInteger;
    ch= getChar();
    {
        if (ch >= '0' && ch <= '9') goto ST_15;
        if (ch >= 'A' && ch <= 'F') goto ST_15;
        if (ch == 'L') goto ST_11;
        if (ch == 'U') goto ST_11;
        if (ch >= 'a' && ch <= 'f') goto ST_15;
        if (ch == 'l') goto ST_11;
        if (ch == 'u') goto ST_11;
    }
    if (lexCheck(token, true)) return;
    goto ST_0;

We first prepare lexCheck for accepting an integer. Assuming that the integer actually terminates in this state, none of the if tests will succeed. lexCheck will call Ctoken::getInteger on the string discovered, which should match one of the C integer forms, as described by this regular expression. Function getInteger can be found in lib/lexf.h and lib/lexf.cpp. This is a fairly complicated function, designed to scan the token string and set the data member ivalue in its parent Ctoken object. It also sets the semt flag, according to the perceived size and other information.
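In outline, getInteger must recognize the decimal, octal and hex prefixes, accumulate a value, and choose the smallest semantic type that fits. The following is a simplified sketch of that logic, not the library source (the real function also handles the l/L/u/U tags, unsigned types, and overflow):

void Ctoken::getInteger(csi tp, csi end, bool casesens) {
    long v = 0;
    if (tp != end && *tp == '0' && (tp+1) != end &&
        (*(tp+1) == 'x' || *(tp+1) == 'X')) {        // hexadecimal form
        for (tp += 2; tp != end && isxdigit(*tp); ++tp)
            v = v*16 + (isdigit(*tp) ? *tp-'0' : (tolower(*tp)-'a')+10);
    } else if (tp != end && *tp == '0') {            // octal form
        for (++tp; tp != end && *tp >= '0' && *tp <= '7'; ++tp)
            v = v*8 + (*tp - '0');
    } else {                                         // decimal form
        for (; tp != end && isdigit(*tp); ++tp)
            v = v*10 + (*tp - '0');
    }
    ivalue = v;
    if      (v <= 0x7F)   semt = CHAR;    // smallest type that fits
    else if (v <= 0x7FFF) semt = SHORT;
    else                  semt = LONG;    // simplified; see lib/lexf.cpp
}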

Building a Complete Lexical Analyzer


We've seen how utility lexfile builds a C++ source file (which we've called Clex.cpp) containing an implementation of an FSM. Clex.cpp can be tested as part of an executable program by compiling it and linking it with various library functions drawn from lib/qplib.lib. Directory lextest contains all the necessary tools, and a make file to generate a lexical test program. Copy or write a C.lex file into this directory, then call
make LEX=C

This should generate a program file lextest.exe. Note that the ".lex" suffix of C.lex is understood by make. You can now call lextest with a source file name, like this:
lextest ctest.src > ctest.out

It will look for and report all the tokens it finds in your file. Try various forms of C strings, numbers, and identifiers. Whitespace, whether space, tabs, line feeds or comments, should be ignored. It should also recognize any literal tokens you've specified in the lexical description. Lexical errors are reported in a friendly way and skipped over. A lexical error occurs on a leading character that doesn't belong to any token, or if the string doesn't fit any FSM sequence to a halt state.

Program lextest.exe expects a source file name, containing comments and tokens similar to a C program. Of course, any tokens must be among the list specified in C.lex. Here's a special source file, ctest.src:
/* A lexical test file for ctest.src
tokens, numbers and strings
*/ab/* two */15
// various number forms
x1 15 76.76 0377 0xFFC8 22L 22u 75F3 75.15E22 3E-5 3E+5

// various string forms with escapes
"\a\b\c\d"
"a" " e's \" \" \n\r\f\'"
"string with tilde characters ~ \x7E \x7e \176 "
// character forms
'a' '0' '\\' '\'' '\"' '\r' '\0' '\n'
// various special tokens and reserved words
if :?* for==switch:= ::==:
// lexical errors -- these are not tokens
then .$

Notice that this contains both kinds of C comment, including some that overlap two lines. It also contains samples of the C special escaped string and character forms, for example,
"string with tilde characters ~ \x7E \x7e \176 "

Various number forms are included. This file should be accepted by lextest.exe (except for the last line) and produce the following report:
; 1: /* A lexical test file for ctest.src
; 2: tokens, numbers and strings
; 3: */ab/* two */15
Identifier   ab
Integer      15
; 4: // various number forms
; 5: x1 15 76.76 0377 0xFFC8 22L 22u 75F3 75.15E22 3E-5 3E+5
Identifier   x1
Integer      15
Real         76.76
Integer      255
Integer      65480
Integer      22
Integer      22
Real         75000
Real         7.515e+023
Real         3e-005
Real         300000
; 6:
; 7: // various string forms with escapes
; 8: "\a\b\c\d"
String       "\a\bcd"
; 9: "a" " e's \" \" \n\r\f\'"
String       "a"
String       " e\'s \" \" \n\r\f\'"
; 10: "string with tilde characters ~ \x7E \x7e \176 "
String       "string with tilde characters ~ ~ ~ ~ "
; 11: // character forms
; 12: 'a' '0' '\\' '\'' '\"' '\r' '\0' '\n'
Character    'a'
Character    '0'
Character    '\'
Character    '\''
Character    '\"'
Character    '\r'
Character    '\x00'
Character    '\n'
; 13: // various special tokens and reserved words
; 14: if :?* for==switch:= ::==:
Literal      if
Literal      :
Literal      ?
Literal      *
Literal      for
Literal      ==
Literal      switch
Literal      :
Literal      =
Literal      :
Literal      :
Literal      ==
Literal      :
; 15: // lexical errors -- these are not tokens
; 16: then .$
Identifier   then
then .$
     ^^
***lexical error
then .$
     ^^
***lexical error
; 17:
EOF          EOF

The report may be a bit confusing to read. Each source line is included in the file, preceded by a semicolon. Each token is reported in the order in which it's found. For lexical tokens, the generic name is given (i.e. Integer, Identifier, Real, String, Character), then the specific token is printed (i.e. "22"). Literal tokens are identified by the word Literal.

Using the lexgen Tools in Your Own Code


Up to this point, we've described the theory of lexical analysis, and have reviewed some of its internal program features. But just how might it be applied to a problem of your own choosing? In particular, how would we use it to build a parsing engine that draws upon a set of tokens produced by our FSM theory? We recommend starting by building a lexical analyzer in directory lextest as described in the previous section. To do this, write your own lexical analyzer definition in a .lex file as described there, for example, mylex.lex. You can then build a running executable by calling
make LEX=mylex

If all goes well, you'll have a working lexical analyzer designed to read any source file and report its tokens. It will be in the executable file lextest.exe.

Lexgen Classes and Function Prototypes


Classes Ctoken and Clexf are defined in the header file lib/lexf.h. Ctoken, as an object, describes one token during scanning. Clexf, as an object, describes a full lexical analyzer. You may have many Ctoken objects in your environment, but usually only one Clexf. (But you can have several of those, too, if you wish).

Most of the common member functions for these objects are defined in lib/lexf.cpp. However, some member functions are generated from a .lex file that you design. In particular, the source code for function getToken is generated especially for each lexical analyzer that you request. Its file can also be assigned a special name, so that several of these can coexist in the same directory.

An Application
Let's explore an application of Clexf through an example. File lextest.cpp can be found in directory lextest. It's not a generated file, so you can safely modify it. This contains the main function given below. It's designed to test a lexical analyzer by scanning its tokens, and reporting each one. White space is silently skipped.
int main(int argc, const char **argv) {
    progname= argv[0];
    if (argc != 2) giveHelp("expecting a file name");
    Cmylexf lex;
    const char* fname= argv[1];
    if (lex.open(fname)) {
        while (!lex.atEOF()) {
            Ctoken& token= lex.nextToken();
            token.dump();
            cout << endl;
            lex.tokenRead();
        }
        lex.close();
    }
    else giveHelp("unable to open file");
    return 0;
}

Here's what main does:

The first parameter (argv[1]) should be a file name. This is the source file for the runtime lexical analyzer. That file name (a char array) is assigned to the variable fname. There's no point in using the string class here, because the C parameter list argv is provided as a char** array.

A lexical analyzer object is created: Cmylexf lex. This is a derived class of Clexf. Its sole purpose is to supply a function to print lines as read by the Cinput class. Cinput calls the virtual function getLineCallback on each input line. By overriding this, we can print input lines before they are processed by the lexical analyzer.

The lexical analyzer is opened: lex.open(fname). Opening the lexical analyzer is like opening a file. It will see if the source file can be opened, then will prepare for pulling tokens out of it.

If the file open was successful, a while loop is started. Each turn of this loop first sees if an end-of-file is reached. If not, it accesses the next token, through lex.nextToken(). This returns a reference to the next token, but does not advance the token reader. (If this is the very first call, this will be the first token found in the source file.) You can therefore call it any number of times, and you'll get the same token. With this token, you can fetch the token code or the token value (number, string, or whatever) through one of the Ctoken member functions.

In this default token reader, the token is merely dumped to cout through the call token.dump(). This doesn't produce a line return, so that comes next. The call lex.tokenRead() advances the read head to the following token.

When an EOF is reached, the while loop exits. lex.close() is then called, which effectively closes the input source file.

Using lexgen in General


We can now describe the token operations used in lexgen as follows:

You need a Clexf object declared somewhere. We've declared it in main, but you might want to have it in your own class or at a global level.

The lexical analyzer lex needs to be opened with a file name, with lex.open(fname), in a manner similar to opening a file for reading. You should close it after finishing with your scanning.

A reference to the next, or current, token is returned by the call lex.nextToken(). This reference is valid until the token is scanned through a lex.tokenRead() call.

You can also get the token's code from lex.tokenNumber(). This returns a small integer code for the token. To see what integer code corresponds to each of your tokens, find the generated lexical file, e.g. Clex.cpp, and the char array tokenNames[ ]. You'll see a list of your tokens as strings, and alongside each one is its token code as an array index. Here's what this array looks like in Clex.cpp:
const char* Ctable::tokenNames[]= {
    /* 0*/ "WhiteSpace",   // casesens= on
    /* 1*/ "EOF",          // casesens= on
    /* 2*/ "Identifier",   // casesens= on
    /* 3*/ "Integer",      // casesens= on
    /* 4*/ "Real",         // casesens= on
    /* 5*/ "String",       // casesens= on
    /* 6*/ "Character",    // casesens= on
    /* 7*/ "if",           // casesens= on
    /* 8*/ "for",          // casesens= on
    /* 9*/ "switch",       // casesens= on
    /* 10*/ "=",           // casesens= on
    /* 11*/ "==",          // casesens= on
    /* 12*/ ":",           // casesens= on
    /* 13*/ "?",           // casesens= on
    /* 14*/ "*",           // casesens= on
};

For a lexical token (identifier, number, string), you can fetch the token's value through one of these member functions of Ctoken:
long int Ctoken::getInteger();           // if the token is an integer
double Ctoken::getDouble();              // if the token is an integer or double
const string Ctoken::getStringValue();   // for any token
short Ctoken::getsemType();              // returns a semType code describing the token

You can fetch the whole Ctoken object through this function:
void Clexf::getToken(Ctoken &token);

Note that you need to declare a Ctoken object, then pass it to this function by reference. Your object will then be a copy of the current one in the lexical analyzer queue. When you are finished examining the current token, call lex.tokenRead() to fetch the next one.

Watch for an end-of-file. This may occur on any token through some error in the source file. Clexf will just continue to deliver end-of-file tokens no matter how many times you call lex.tokenRead() or lex.nextToken(), but your program presumably should give up and close the source file.
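For instance, to keep your own copy of the current token before advancing (a sketch using only the calls described above):

Ctoken tok;           // your own token object
lex.getToken(tok);    // tok becomes a copy of the current token
lex.tokenRead();      // advance; tok remains valid afterwards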

Summary
A lexical analyzer for almost any modern language can be generated automatically from a description of the tokens of the language. Tokens are either simple literal tokens, standing for themselves, or more complex lexical tokens, which stand for some large set of tokens. In either case, a finite state machine can be constructed from the set of regular expressions describing the token set. The result is a well optimized lexical analyzer.

A practical analyzer requires some mechanism to interpret the lexical tokens. We do this through lexical functions, which are called after the FSM has scanned a lexical token. A lexical function converts the string form of a token into an internal form suited to the needs of the subsequent parsing system. The class Ctoken plays a key role in this process. It carries both the internal token forms, and member functions that serve as the necessary lexical functions.

White space, including spaces, tabs, line feeds and comments, can be considered a form of token, albeit one in which no token is in fact reported; rather, the scanned string is just skipped over and ignored. Lexical errors fall out of this analysis in a straightforward way. Error recovery from a lexical error is similarly easy to provide.

The FSM built in this way has a mechanism for scanning ahead in order to find the longest string compatible with some token. If it fails to find a long-string token, it may fall back on a compatible token of a shorter length. This lookahead mechanism is achieved with no great impact on performance.

References
Also see the Bibliographic Notes at the end of chapter 3.

[1] Tony Mason and Doug Brown, lex and yacc, O'Reilly & Associates, Inc., Sebastopol, CA.


Chapter 4: Context-free Grammars


W. A. Barrett, San Jose State University nch4.doc, vs 4.1

Introduction
We've explored a simple form of parsing using a finite-state automaton, defined through regular expressions or right-linear production rules. We're now going to look at a more powerful way of describing a language and develop a parser for it. This method uses production rules to describe the syntax of a language. These are also often used to define a language, in a form called Backus-Naur Form, or BNF for short.

Production Rule
A context-free production rule has the form

X → x1 x2 x3 ... xn

i.e.

left-member → right-member

where X is a nonterminal and x1, x2, etc. are either terminals or nonterminals. Terminals are the tokens of the language; they appear in the program's sentence material. Nonterminals stand for some set of sequences of tokens, or empty.

There may be nothing in the right member. We designate this by an empty production rule:

X → ε

where ε stands for the empty string.

This is a context-free production rule. Each production rule has exactly one nonterminal as its left member, and it has zero or more terminals and nonterminals in its right member. There are more general forms of rule, for example the context-sensitive production rule, in which the left member is some sequence of nonterminals and terminals in general. Context-sensitive rules are not generally used in language development.

Recall that in chapter 3 we introduced the right-linear production rule, which is closely associated with a finite-state machine. We are now going to examine the more expressive context-free production rules, which are associated, for parsing purposes, with a finite-state machine coupled to a pushdown stack.

Grammar
The syntax of a language can (mostly) be defined by a context-free grammar, which is:

A finite set of terminals (tokens) and nonterminals. The nonterminals will appear as the left members of the production rules. The terminals are all the others.

A set of production rules.

A designated goal nonterminal. This will usually be the left member of the first production rule, by convention.

Expanding a Nonterminal
Given a grammar, a string αXβ containing a nonterminal X, and some production rule X → ω in the grammar, we can replace X in the string by ω, regardless of the context strings α and β, yielding the derivation step

αXβ ⇒ αωβ

In general, α, β, and ω may contain other nonterminals that may similarly be expanded. Note that an appearance of X in the string αXβ can be expanded without regard to its neighbors α and β. We just form a new string consisting of α followed by ω followed by β. If ω is empty, then the new string is just αβ. Also note that α and/or β may be empty. In particular, we start an expansion with the goal nonterminal (call it G), which can also be considered the string G.

Sentential Form
If there exists a sequence of derivation steps in a grammar such that the goal nonterminal G can derive the string α, which may contain terminals or nonterminals, we say that α is a sentential form of the grammar. We can express the derivation like this:

G ⇒* α

The symbol ⇒* means zero or more derivation steps.

Sentence
A sentence of the grammar is a sentential form in which all the symbols are terminals. A sentence corresponds to a program or language statement as it might be written by a programmer. By expanding all the nonterminals in some fashion, we can arrive at one of many possible sentences that the grammar can generate. Notice that a sentence is a sentential form, and therefore must be derived somehow from the goal nonterminal G by a sequence of derivation steps in the grammar.

Language
The language of a grammar is the set of all sentences of the grammar. The language may be finite, but for most common grammars, it is countably infinite.

Recursion
A grammar is said to be recursive if some nonterminal X can derive a sentential form that contains another instance of X. The recursion may be in just one rule, or it could appear among several rules. Recursive production rules cause the generated language to be countably infinite: we can use the same rule over and over again, to generate an infinite variety of sentences. We'll see examples of this next.

Example
Let's look at a simple grammar; call this G0:

G → E
E → T
T → F
E → E + T
T → T * F
F → ( E )
F → id

The terminal symbols (tokens) are + * ( ) and id. The nonterminal symbols are G, E, T, F. The goal nonterminal is G.

The rules E → E + T and T → T * F are recursive, since they expand an E (or T) into a string with another instance of E (or T). Another recursion appears in the chain of E, T and F, through the rule F → ( E ). Recursion in a grammar means that we can generate arbitrarily long sentences. It's similar to the use of the * operator in regular expressions.

Here's an example expansion of G into a sentence. G clearly expands into E, i.e.
G ⇒ E

E can expand into E + T or into T (your choice, since there are two E rules). Let's take the first choice:
G ⇒ E ⇒ E + T

Now E can expand into E + T or into T. Here's one pair of choices:


G ⇒ E ⇒ E + T ⇒ E + T * F

T can expand into T * F or F (again, your choice).

Continue, by making more choices:


E + T * F
⇒ E + T * id
⇒ E + F * id
⇒ E + id * id
⇒ T + id * id
⇒ F + id * id
⇒ ( E ) + id * id
⇒ ( T ) + id * id
⇒ ( F ) + id * id
⇒ ( id ) + id * id

That's as far as we can go. Each of the forms is a sentential form. The last form, in which there are no nonterminals, is a sentence. This grammar expands G into all possible arithmetic expressions with the binary operators + and *, with parenthesizing. The sentence
( id ) + id * id

is said to be derived from the nonterminal G.

Lists of Things
Let's look at just two of the production rules in grammar G0. For the moment, suppose that E is the goal nonterminal and that the terminal tokens are T and +:
E → E + T
E → T

We can derive a single T token easily in just one step:


E ⇒ T

Suppose we want two Ts, separated by a + token. Here's a derivation for it:
E ⇒ E + T ⇒ T + T

Suppose we want three Ts, separated by + tokens. Here's a derivation:


E ⇒ E + T ⇒ E + T + T ⇒ T + T + T


It should be clear that we can obtain any number of Ts separated by + tokens by applying the recursive rule E → E + T repeatedly. In particular, if we want n instances of T, then we apply the rule E → E + T n-1 times, which creates the sentential form

E + T + T + ... + T    (n-1 T tokens)

Then apply the rule E → T once to obtain the n T tokens separated by n-1 + tokens:

T + T + T + ... + T    (n T tokens)

Lists of objects appear over and over in grammars. Sometimes the objects are separated by another token, and sometimes the objects are terminated by another token. We've just seen an example of a token separation generator.

List with a separator and terminator


To get a token terminator, the following grammar might be used instead:
E → E T ;
E → T ;

Here, a semicolon will always appear after each T. So the legal sentences look like these:
T ;
T ; T ;
T ; T ; T ;

List with zero or more elements


Suppose we wish a list of T objects, possibly empty, with semicolon separators. These rules provide just that:
L →          (the empty string)
L → E
E → E ; T
E → T

Here are example sentences in this grammar:


(empty)
T
T ; T ; T

List separated with one or more semicolons


These rules generate a list of one or more T in which each of the separators are one or more semicolons:
E → E S T
E → T
S → S ;
S → ;

Notice that the S rules generate a sequence of one or more semicolons. So we just need to insert an S between E and T in the first rule. Here are example sentences in this grammar:
T
T ; T
T ; ; ; T


Derivation Order and Parsing

Leftmost and Rightmost Derivation


In general, a sentential form contains more than one nonterminal. Which one should be expanded first? In the end it doesn't matter, since each of the expansions takes place without regard to their context. It makes a difference in the appearance of the sentential forms during parsing, but the ultimate sentence can be derived using the same production rules in any of several different orderings. However, we distinguish two special derivation orders:
- A leftmost derivation chooses the leftmost nonterminal in each derivation step.
- A rightmost derivation chooses the rightmost nonterminal in each derivation step.

For example, look at this sentential form found in deriving (id)+id*id in grammar G0:
E + F*id

We could choose either E or F for the next derivation step. E is the leftmost nonterminal, and F is the rightmost nonterminal. In our derivation given earlier, we have consistently chosen the rightmost nonterminal, so it is a rightmost derivation. Here's what it would look like as a leftmost derivation:
G ⇒ E ⇒ E+T ⇒ T+T ⇒ F+T ⇒ (E)+T ⇒ (T)+T ⇒ (F)+T ⇒ (id)+T ⇒ (id)+T*F ⇒ (id)+F*F ⇒ (id)+id*F ⇒ (id)+id*id

You'll notice that we've used exactly the same set of production rules in this derivation as in the previous one, but in a different order. Also, the sentential forms aren't the same. With a leftmost derivation, we have the form (id)+T, which you won't find in the rightmost derivation.

Parse
A parse of a sentence is the reverse of a derivation. Given a sentence in the language, the parse of the sentence is the sequence of production rules that cause the goal to expand into the sentence.

Parser
A parser for a language is an algorithm that can find the parse for any sentence in the language, and that also reports an error for every sentence not in the language.


Derivation Tree
A derivation can be displayed as a tree. An example is given in figure 1:
[Fig. 1. Derivation tree for a sentence. The root node is G; the internal nodes are the nonterminals E, T, F; the leaf nodes are terminals. Read left to right, the leaves spell: ( id ) + id * id]

The G at the top of this tree is the tree's root. It will always be the goal nonterminal of the grammar. The derivation order is downward in general. For example, G derives E through the production rule G → E, so we draw a line down from G to an E. At the next level, E derives E+T, so we draw three lines down to an E, a +, and a T. Whenever a nonterminal symbol appears in the tree, it should always be expanded by choosing some production rule. Therefore the symbols that appear inside the tree are always nonterminals. We call these internal nodes. When a terminal symbol appears in the tree, it can't be expanded. It therefore becomes a lower terminus node, which we call a leaf node. The derived sentence can be read from the tree by tracing the leaf nodes from left to right. We also see that the order (leftmost or rightmost) in which a derivation is performed is not implied in the derivation tree. The final tree will be the same whether the left nonterminal or the right nonterminal (or any other, for that matter) is expanded in any particular step.


More about Left Recursion


A production rule like E → E + T is said to be left-recursive. Its left member is the leftmost symbol in the rule's right member. The derivation tree of a left-recursive rule looks like this (figure 2):
[A chain of E nodes runs down the left side; each E expands to E + T, with a single T at the bottom, spelling T + T + T + T]

Fig. 2. Left recursive tree

Notice how the E nodes form a chain down the left side of the tree, while the T objects (which may be subtrees rather than terminals) decorate the right children of that chain. Also notice that the leftmost T object (the first one in the sentence) is at the bottom of the tree, and the rightmost T is at the top of the tree.

A sequence of production derivations can also yield a left recursion. In general, a nonterminal X is said to be left-recursive if there exists a derivation of the form X ⇒* Xα in the grammar. Thus X might derive a Y... , which derives a Z... , which eventually derives a string X...

Right Recursion
A nonterminal X is said to be right-recursive if X can derive a form αX. A right-recursive production rule has the form X → αX. For example, consider the production rules
E → T ** E
E → T

Just as in the left-recursive case, we can obtain sequences of T objects separated by ** objects. In fact, any sentence obtained by these right recursive rules can also be obtained by the left recursive rules
E → E ** T
E → T

There's a big difference in the derivation trees, however. The derivation tree for a right recursion looks like this (figure 3):
[A chain of E nodes runs down the right side; each E expands to T ** E, with a single T at the bottom, spelling T ** T ** T ** T]

Here, the leftmost T object (the first one in the sentence T**T**T**T) is at the top of the tree, while the last one is at the bottom.

Fig. 3. Right-recursive tree

Ambiguity
A grammar is said to be ambiguous if there exist two distinct derivation trees for some sentence in the grammar. Put another way, if we can find some sentence in the grammar which has two distinct leftmost (or rightmost) derivations, then the grammar is said to be ambiguous. Here's an example of an ambiguous grammar:
G → S
S → if E S
S → if E S else S
S → a

Now consider these two derivations, both of which are leftmost:


G ⇒ S ⇒ if E S ⇒ if E if E S else S ⇒ if E if E a else a

G ⇒ S ⇒ if E S else S ⇒ if E if E S else S ⇒ if E if E a else a

They clearly generate the same sentence. Both are leftmost. But the derivations are different. So are the derivation trees (figure 5).

[Figure 5. Ambiguity illustrated: two distinct derivation trees for the sentence if E if E a else a. In the first tree, the else clause hangs under the inner if; in the second, it hangs under the outer if.]

It's clearly not good to find an ambiguity in a grammar. When there are two (or maybe more) possible derivations for the same sentence, there will usually be two or more possible meanings that can be placed on the sentence. In this example, which is clearly drawn from the C programming language, the question comes down to whether the second a belongs to the first or the second if. The E is some expression that evaluates to a true or false. Suppose the first E is false. In the first case, neither of the a branches is executed, while in the second case, the second a branch is executed. In fact, this is a real ambiguity in the C and Pascal syntax. It happens that the first interpretation in figure 5 is the one chosen silently by compilers. This ambiguity can be fixed through a disambiguating rule, which is that the else clause is attached to the nearest preceding if. Thus the first interpretation is arbitrarily chosen, and the second is arbitrarily rejected. Disambiguating rules in Qparser are discussed in chapter 12.

Detection of an Ambiguity
How can we tell if a context-free grammar is unambiguous or not? It happens that this question is undecidable in general. There is no algorithm that can accept an arbitrary context-free grammar and decide whether or not it is unambiguous. That's not the same as discovering a distinct pair of derivations for the same sentence, which would prove that the grammar is ambiguous. However, we can try to construct a parser for the grammar, following a theoretically correct parser construction algorithm. That algorithm, when applied to some grammar, can determine that it is unambiguous by discovering that every parse step can be made unambiguously. By this we mean that none of the parsing steps ever leaves a parsing decision open. We've seen how this can be done for a top-down parser described by a syntax diagram. If the parsing algorithm discovers a parsing step that can't be made unambiguously (for example, a first-set conflict in a syntax diagram), it cannot announce that the grammar is ambiguous, only that it has failed to generate a deterministic parser. Some parsing step requires a decision that can't be made deterministically under the parser's rules.

This may seem like a fine point, but it is an important one. There are many classes of parsers, of which the LL(k) (top-down) and LR(k) (bottom-up) parsers are two examples. These have different powers of discrimination among grammars. In general, if m > n, then an LR(m) parser can accept every grammar that an LR(n) parser can accept, plus some that the LR(n) parser can't accept. (By accept, we mean that we can construct a deterministic parser from the grammar.) Furthermore, there are grammars that can be parsed with an LR(k) parser, but not an LL(k) parser, so the LR(k) method is more powerful in that sense. LR(k) is less powerful in another sense: an LR(k) parser can only announce a parsing decision after a production rule has been applied, whereas an LL(k) parser (if it exists) can announce a parsing decision before the production rule is fully applied. In practice, k=1 is sufficient for all the common programming languages. Those languages for which k=1 isn't sufficient usually don't yield to a larger k anyway. They are most likely to have some innocent-looking feature that requires an unlimited number of token lookaheads in order to resolve some step. The FORTRAN language doesn't yield to an LR(k) parser for any k. It requires a special parsing algorithm designed especially for the language.

Extending a Grammar
Grammar G0 represents a very primitive algebraic language. However, it can be extended to cover almost any modern programming language, including one of your own design, by adding more rules. Here are some guidelines:
- Remember that nonterminals are cheap. Invent them as you need them. Decide in your mind just what each new nonterminal is supposed to represent, then write one or more production rules to represent that concept. Don't try to make really complicated things with one nonterminal.
- You've seen how to describe a sequence of things, a choice of things and a list of things using production rules. Stick to those ideas and life will be much easier.
- Don't describe the same language fragment two different ways. You'll very likely introduce an ambiguity. Find a structured way to describe your concepts once and once only.
- Make small additions to your grammar, and test each one with a parser-generator tool such as yacc or nlr1. It will tell you promptly whether you've created an ambiguity. Writing out a large grammar, then trying to track down the ambiguities, will be very difficult.
- Study worked-out examples. A complete Pascal grammar that is unambiguous is given in appendix 4, also in the directory pascal5. Feel free to borrow the patterns developed there for your own language.
- Don't trust a BNF description given in a language reference manual, unless the author certifies that it is "LR(1)". Many BNF descriptions are just that: designed to help someone write a program, but not to serve as the basis for a parser.

Expansion to Statements
Let's try expanding G0 to provide for sequences of statement forms in Pascal. A statement can be regarded as some (legal) sequence of tokens terminated by a semicolon. For example, each of these is a statement:


k:= k+1;
if x<y then p1(x, 6) else p1(y, 6);
while k<6 do begin x:=2*k; k:=k-1; end;

Notice that the while statement above contains this sub-statement:


begin x:=2*k; k:=k-1; end;

This, in turn, contains these sub-statements:


x:=2*k; k:=k-1;

With this in mind, let's first see how to expand G0 to provide for a sequence of statements. We use the list-of ideas described earlier to do this, using left recursion. Here's what the grammar would look like; we'll call it gA:
G → StmtList
StmtList → StmtList Stmt ;
StmtList → Stmt ;
Stmt → id := E
E → E + T
E → E - T
E → T
T → T * F
T → T / F
T → F
F → ( E )
F → id
F → num

Notice that the production rules starting with


E → E + T

are the same as those in G0, expanded to include a - and / operator, as well as a number num. The two StmtList production rules provide for a sequence of Stmt ; forms. Since they are left-recursive, they will be evaluated from left to right.

More Statement Forms


We can now easily add several more statement forms, for example, a while-do form, a begin-end form, and more. All we have to do is add these production rules to our set:
Stmt → while E do Stmt
Stmt → begin StmtList end
Stmt → for id := E to E do Stmt
Stmt → if E then Stmt
Stmt → if E then Stmt else Stmt

Procedure and Function Calls


In Pascal, a procedure is like a function, but returns nothing. A procedure call appears in a source Pascal program as a statement, like this:
Stmt → id ( funcParms )
Stmt → id

Chapter 4: Context-free Grammars, page 103

If there are no parameters to a procedure call, we don't write (), we just omit them. (I like the C convention better, personally, but there are also good reasons for requiring () for a C function call with no parameters.) Then funcParms needs to expand into a list of expressions, separated by commas. We know how to write that:
funcParms → funcParms , E
funcParms → E

These four production rules now fully convey the syntax of a procedure call. What about function calls? In Pascal, functions return some value, and must be used as part of an expression. In fact, a function call plays about the same role as the id and num forms found in the F rules. So we need two F rules like this:
F → id ( funcParms )
F → id

Unfortunately, we now have two rules


F → id

and that is ambiguous! One of these is supposed to represent a simple named variable, while the other one represents a function call. That can only be resolved later by looking at the types associated with these identifiers. Aside from that, we now have an expanded language that includes function and procedure calls.

Arrays and Pointers


We haven't introduced any language declarations yet, but we can anticipate that by introducing a mechanism for expressions that include array dereferencing, pointer dereferencing, and record structure dereferencing. The new operators required are these:
.      record structure dereferencing
[ E ]  array indexing
^      pointer dereferencing

The pointer operator follows the pointer name, unlike in C, in which the pointer operator * precedes the pointer. Since we have a combination of these, we need some more recursive rules to express all this. They all belong in the F production rules, like this:
F → F ^
F → F [ E ]
F → F . id
F → id
F → id ( funcParms )
F → ( E )
F → num

Because of the left recursion, the new operators are evaluated from left to right. We can also form compound forms, like this example:
addrbook(i,j)^.addr[k-6]

This example can be interpreted like this:


addrbook(i,j)   call the function addrbook with parameters i, j
^               the function returns a pointer
.addr           the dereferenced pointer yields a record position
[k-6]           the record field is an array; use index k-6


Declarations
A declaration is a high-level language form that defines types and variables. Variables are like the id used above, i.e. names that refer to some section of memory, or a file, etc. needed by the program at runtime. Typed languages like C and Pascal require that all variables be associated with some type. These languages also provide a mechanism for declaring new types and compound types; these are the language declaration forms. Functions and procedures must also be declared, either as a prototype or as a full definition. Types and declarations are fairly complicated, and deserve their own chapter, given later. We will here introduce a few production rules for some simple declarations. Let's start by defining a declaration as a form of Stmt. (This works well for C++, but isn't quite right for Pascal. Pascal requires its declarations to be in a separate section, preceding all the statements, rather than mixed in with the statements.)
Decl → VAR id : Type
Type → INTEGER
Type → REAL
Type → BOOLEAN
Type → ^ Type
Type → ARRAY [ E ] OF Type
Type → RECORD FieldList END
FieldList → FieldList ; Decl
FieldList → Decl

To see how these work out in practice, notice that:
- A declaration starts with VAR, is followed by a single declared object, then ends in a semicolon. (The terminating semicolon comes from the StmtList production rules.)
- Each Decl is an identifier followed by a colon followed by a Type.
- Each Type form can be any of the alternatives shown in the grammar. For example, just INTEGER or REAL. It can also be an ARRAY form, where the E is the array dimension, and should be a constant.
- The RECORD form has its own internal structure, which is essentially a list of Decl forms separated by semicolons. The END token is essential; it fixes the end of this RECORD type structure.

Procedure and Function Declarations


We can easily write declaration production rules for a procedure declaration, using some of the tools given above. Here's what that looks like:
ProcType → id ( ProcParms ) ; Stmt
ProcType → id ( ProcParms ) : Type ; Stmt
ProcParms → ProcParms , ProcParm
ProcParms → ProcParm
ProcParm → id : Type
ProcParm → VAR id : Type

The astute student will wonder why a semicolon appears in the ProcType production rules; there is no such semicolon in C. This is a Pascal variation. Note that the Stmt syntax can be used in this. A user will typically expand this Stmt with a BEGIN ... END form, although for a single statement, that isn't necessary.


The second ProcType rule is for a function: it returns the type Type. The VAR in a ProcParm states that this parameter is passed by reference, i.e. an address of the variable is passed. It's much like the & operator as used in C++ function declarations. The question now is what to do with this ProcType nonterminal. How do we attach it into our language? To do this properly, we need to revisit the top-level production rules for this language. Pascal essentially requires a PROGRAM token followed by various simple declarations, followed by procedure and function declarations, followed by a BEGIN ... END body that represents the main program. (See the Pascal example program in chapter 1.) Here's what that looks like:
G → PROGRAM DeclList ProcTypeList BEGIN StmtList END .
DeclList → DeclList ; Decl
DeclList → Decl
ProcTypeList → ProcTypeList ; ProcType
ProcTypeList → ProcType

Notice the . at the end of the PROGRAM production rule. This set of grammar rules has certain limitations, which are not obvious on first inspection, compared to an industrial-strength Pascal grammar:
- You have to have at least one declaration and one procedure declaration. These can't be empty as written.
- You have to have at least one statement in each StmtList.
- Pascal expects a (...) form just after PROGRAM.
- Procedures and functions can be nested.
- We haven't provided for the Pascal TYPE declaration, i.e. a way for a user to declare new types.
- There's a way to declare procedures and functions as a prototype, which we haven't provided here.
- There may be some ambiguities that will appear if this whole grammar is collected together.

Complete Pascal Grammar


Appendix 4 contains a complete Pascal grammar. There are also associated syntax diagrams (see chapter 8), and a brief discussion of the language's semantics. This grammar is not ambiguous, and has a deterministic bottom-up LALR parser. The associated semantics have also been worked out as part of the Qparser translation tools.

Grammar Validity
A grammar must satisfy a few basic rules in order to be at all useful as a language definition:
- (completion) For every nonterminal X found among the right members of the production rules, there must be at least one rule X → ω in the grammar.
- (reachability) For every nonterminal X, there must exist at least one derivation from the goal symbol G that contains X.
- (useful nonterminals) Every nonterminal X should be capable of deriving at least one terminal string.

The significance of the completion rule is that we don't want our grammar to derive some sentential form containing X, and then discover that there is no production rule X → ω that we can use to continue the derivation. The significance of the reachability rule is that we don't want a grammar with useless nonterminals, that is, nonterminals that cannot possibly appear in any derivation from the goal nonterminal. There's no real harm in this, but the useless production rules can be confusing to a programmer, suggesting that features exist that really don't. The significance of the useful rule is that it's quite easy to write a grammar in which some nonterminals can never generate a terminal string. All this requires is a recursion in which a nonterminal X always generates something with another X in it! We can easily design automated tests for these properties, as follows:

Completion Algorithm
1. Scan all the production rules, and form a set S of all the left members. These are the grammar's nonterminals.
2. Scan all the production rules, and consider each nonterminal X found among the right members. If X is not in S, then the grammar is incomplete.

As a practical matter, the Qparser grammar system does not perform this test. Instead, it collects all the symbols found as the left members of the production rules, and considers those as the set of nonterminals. Anything left over among the right members is considered a terminal token.

Reachability Algorithm
1. Form a set S of nonterminals, initially empty.
2. Place the goal token G in S.
3. For every nonterminal X in S, and every production rule X → ω, place all the nonterminals found in ω into S. Repeat until there's no further growth in S.
4. If there exists a nonterminal Y in the grammar that is not in S, then Y is unreachable. It and all its production rules can be removed from the grammar.

The key to this algorithm is that a reachable nonterminal must be derivable from the goal symbol G. That is essentially what steps 2 and 3 do in this algorithm. This test is made in the Qparser system; reachability can't be decided in any other way.
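To make the fixed-point iteration concrete, here is a minimal C++ sketch of the reachability computation. It is illustrative only, not Qparser code; the Rule struct and the way the set of nonterminals is supplied are assumptions made for the example:

#include <set>
#include <string>
#include <vector>

// A production rule: a left member plus the symbols of its right member.
struct Rule {
    std::string left;
    std::vector<std::string> right;
};

// Returns the set of nonterminals reachable from the goal symbol.
// 'nonterminals' tells the routine which symbols are nonterminals.
std::set<std::string> reachable(const std::vector<Rule>& rules,
                                const std::set<std::string>& nonterminals,
                                const std::string& goal) {
    std::set<std::string> S{goal};          // step 2: seed S with the goal
    bool grew = true;
    while (grew) {                          // step 3: repeat until no growth
        grew = false;
        for (const Rule& r : rules) {
            if (S.count(r.left) == 0) continue;   // only expand members of S
            for (const std::string& sym : r.right)
                if (nonterminals.count(sym) && S.insert(sym).second)
                    grew = true;
        }
    }
    return S;       // step 4: any nonterminal not in S is unreachable
}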

Useful Nonterminals
Every nonterminal must be capable of deriving at least one terminal string. An algorithm for verifying this is as follows:
1. Mark every nonterminal as useless, i.e. unable to derive a terminal string.
2. Find all the production rules in which the right member is wholly terminal. This includes empty productions. Mark the nonterminals associated with them as useful. There must be at least one of these, otherwise the grammar is not useful; every nonterminal will derive a form with another nonterminal in it.
3. For all the nonterminals marked useless, test their production rules' right members for usefulness. This requires walking through each right member. Move through each terminal or useful nonterminal, but stop on a useless nonterminal. If one right member can be walked through completely, mark its associated nonterminal useful.
4. Repeat step 3 until no further changes in the marks can be made.
5. Any nonterminal still marked useless will indeed be useless.

This test is made by the Qparser system.
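The same fixed-point style carries over to the usefulness test. Another hedged sketch, reusing the hypothetical Rule struct from the reachability example; note that step 2 falls out automatically on the first pass, since a wholly terminal right member is walked through trivially:

// Returns the set of useful nonterminals: those able to derive at least
// one purely terminal string. Reuses the Rule struct from the sketch above.
std::set<std::string> useful(const std::vector<Rule>& rules,
                             const std::set<std::string>& nonterminals) {
    std::set<std::string> U;          // step 1: no nonterminal is useful yet
    bool changed = true;
    while (changed) {                 // steps 3 and 4: iterate to a fixed point
        changed = false;
        for (const Rule& r : rules) {
            if (U.count(r.left)) continue;     // already known to be useful
            bool ok = true;
            for (const std::string& sym : r.right)
                if (nonterminals.count(sym) && U.count(sym) == 0) {
                    ok = false;                // stop on a useless nonterminal
                    break;
                }
            if (ok) {                 // the right member was walked completely
                U.insert(r.left);
                changed = true;
            }
        }
    }
    return U;       // step 5: anything not in U is truly useless
}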

Top-down and Bottom-up Parsing


Let's consider how a parser must operate, given a production-rule grammar. There are two general approaches to parsing: top-down, and bottom-up. These terms refer to how the derivation tree is effectively constructed. Of course, a parser program will merely generate some ordered sequence of production rules, but we can think of these as constructing a derivation tree. Indeed, we'll see that it's quite easy to make either parser generate a tree data structure as an intermediate compiler or interpreter form. A top-down parse starts at the goal symbol, and builds a derivation tree top-down, left-to-right. This is commonly done through a leftmost derivation, following the derivation in a forward direction. Top-down leftmost parsing causes the input sentence to be scanned from left to right. The general parsing problem is: given a nonterminal symbol, and some input text yet to be scanned, which production should be chosen among several possible ones? This decision is made by looking at the next input token (or the next k input tokens).
[Fig. 6. Top-down parsing: the partially built derivation tree for ( id ) + id * id. The scanned part of the input, ( id ) +, is already attached under the root G; the next tree to be constructed has root T and must derive the unscanned part of the input sentence, id * id.]

In figure 6, the parser has completed a portion of this derivation, using our simple G0 expression grammar:
G ⇒ E ⇒ E+T ⇒ T+T ⇒ F+T ⇒ (E)+T ⇒ (T)+T ⇒ (F)+T ⇒ (id)+T

Notice that by constructing the tree from top down by following its left branches first, we also generate a leftmost derivation. Of course, we also have a sentential form in each step of the derivation. At any one step in the derivation, the front portion of the sentential form has been scanned and built into some set of production rules. We call this the closed portion of the sentential form. The unscanned part (that which follows the closed portion) is called the open portion.

More formally, let φ = xα be a left-sentential form (a sentential form produced in a leftmost derivation) in some grammar G, where x is a terminal string, and α either begins with a nonterminal or is empty. Then x is the closed portion and α is the open portion. We say that the boundary between x and α is the border of the sentential form xα. We can describe this through a simple algorithm:
1. Given a left-sentential form, scan it from left to right.
2. Stop on the first nonterminal, or at the form's end. Note that the first such nonterminal is the one to be expanded on the next derivation step.
3. The portion up to, but not including, the first nonterminal is the closed portion. The remainder is the open portion.

Here's an example: consider the derivation given above. It's rewritten below, and run to completion. The closed portion of each sentential form is its leading string of terminals; where a form begins with a nonterminal, the closed portion is empty:
G ⇒ E ⇒ E+T ⇒ T+T ⇒ F+T ⇒ (E)+T ⇒ (T)+T ⇒ (F)+T ⇒ (id)+T ⇒ (id)+T*F ⇒ (id)+F*F ⇒ (id)+id*F ⇒ (id)+id*id

When the closed portion covers the whole sentence, the parsing is complete. In that step the open portion is empty. Notice that the closed portion exactly matches those tokens in the final sentence, while the open portion contains some nonterminals requiring expansion, and therefore cannot match what remains in the final sentence. We can now more precisely state the top-down parsing problem:
1. Given a left-sentential form, locate its leftmost nonterminal, which is the symbol just past the border.
2. If the border is at the end of the form, stop; the parsing is complete.
3. Otherwise, expand the leftmost nonterminal N by choosing an appropriate production rule N → ω from the grammar.
4. Go to step 1.

The key operation here is clearly "choose an appropriate production rule". How can that be done? By examining the tokens in the final sentence just past the closed portion of the sentential form. Let us divide the sentence into two portions: the leftmost portion matches the closed portion of some sentential form, while the rightmost portion, which we'll call the unscanned portion, is what's left over. At each parsing step, we then clearly have the tokens in the unscanned portion of the sentence as a guide to choosing an appropriate production rule. Notice that whatever rule we choose, its right member must ultimately cover some (or none) of the leftmost tokens in the unscanned portion. In general, all the possible unscanned portions comprise a huge set, in fact, an infinite set. In practice, we focus on just the first k tokens that could exist in the unscanned portion, where k is a small number. If our choice can always be made by examining no more than k tokens, we say that the grammar is LL(k). If we can make the choice in every possible case by examining just the first token of the unscanned portion, then we say the grammar is LL(1). If the next two tokens must be consulted, the grammar is LL(2). We will examine this parsing approach in more detail in chapters 7 and 8. We'll also show there that a so-called recursive-descent parser can be constructed by assigning a function to each nonterminal. Calling a function is supposed to make a production rule decision from those rules whose nonterminal corresponds to the function. That is followed by scanning through the right members of the production rule.
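Chapters 7 and 8 develop this fully; as a hedged preview, here is a minimal recursive-descent sketch for grammar G0 in C++. The token interface (toks, peek, advance) is invented for the example, and the left-recursive rules E → E + T and T → T * F are handled with loops, the standard rewrite used by syntax-diagram parsers:

#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical token stream: entries are "+", "*", "(", ")", "id", "eof".
std::vector<std::string> toks;
std::size_t pos = 0;
const std::string& peek() { return toks[pos]; }
void advance() { ++pos; }
void expect(const std::string& t) {
    if (peek() != t) throw std::runtime_error("syntax error: expected " + t);
    advance();
}

void E();   // forward declaration, since F refers back to E

// F → ( E ) | id
void F() {
    if (peek() == "(") { advance(); E(); expect(")"); }
    else expect("id");
}

// T → T * F | F, with the left recursion rewritten as a loop
void T() {
    F();
    while (peek() == "*") { advance(); F(); }
}

// E → E + T | T, likewise rewritten as a loop
void E() {
    T();
    while (peek() == "+") { advance(); T(); }
}

// G → E eof
void G() { E(); expect("eof"); }

Parsing ( id ) + id * id then amounts to loading toks with { "(", "id", ")", "+", "id", "*", "id", "eof" } and calling G().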

Bottom-Up Parsing
In a bottom-up parser, the derivation tree is built bottom-up, from the sentence up to the goal symbol. This is done through a right-most derivation, following the derivation in reverse. A bottom-up right-most derivation will scan the input sentence from left to right. The general parsing problem is: given a partially complete tree, where should the next production rule be attached, and which rule should it be? Look at the partially constructed tree below (figure 7). Two subtrees have been constructed. The first has the root E and expands into the terminal string ( id ). The second subtree has the root T, but expands into the terminal string id. It isn't at all obvious what to do next. Should the E or the T tree be linked into a production? Should the parser try to link the * token into a production rule? Or should the parser just skip over the * token and work on a subtree based on the third id token? The tree that's partially constructed in figure 7 in fact corresponds to a rightmost derivation, working in reverse. Here's the tail end of the derivation for the tree in figure 7, as a rightmost derivation.
G ⇒ ... ⇒ E+T*id ⇒ E+F*id ⇒ E+id*id ⇒ T+id*id ⇒ F+id*id ⇒ (E)+id*id ⇒ (T)+id*id ⇒ (F)+id*id ⇒ (id)+id*id

The last two tokens, * and id, have not yet been scanned by the tree building. The machine needs to somehow decide what is the derivation step that precedes this step:
??? ⇒ E+T*id

This appears to be an impossible situation. Nevertheless, it is possible to design a parsing machine that can make that decision correctly in every circumstance, for a certain class of grammars. We call such a parsing machine an LR(k) parser: L for left-to-right, R for right-most, and k (a small integer, usually 1) for the number of tokens that must be examined next in the input stream in order to make the decision. As with top-down parsing, we can frame certain definitions regarding each sentential form, as follows. Given a sentential form αβx in a rightmost reverse derivation, where x is terminal (and therefore matches the tail of the input sentence), and given that the next parsing step calls for applying the production rule X → β, we say that:
- x is the open portion of the sentential form (yet to be scanned);
- there is no closed portion of the sentential form, since α and β are subject to further processing and reduction;
- however, we say that αβ is the viable prefix of this form, and that β is its handle.

The handle will clearly have to match the right member of some production rule (there may be more than one), and one of those rules will be invoked in the next step of the parsing. β will be replaced by a nonterminal X, where X → β is a production rule. The tokens α will essentially remain in place in that step, but will eventually be used in a later parsing step. In this, any one of the three components, α, β, or x, can be empty. α will be empty if we are working on the front end of the sentential form. β will be empty if the next production rule is an empty production. x will be empty if we are working on the tail of the sentence, having scanned all the input tokens, or if the sentence itself is empty.

There are obviously several open questions here:
- Where is the end of the viable prefix?
- Where is the boundary between α and β, that is to say, what is the handle β?
- What production rule should be chosen to match β? (There may be more than one.)

As with top-down parsing, we are entitled to examine some of the tokens at the head of the open portion x. If we need to examine no more than k tokens at the head of string x in order to make a parsing decision, then we say that the grammar is LR(k). We will follow up on this in chapter 9.

[Figure 7. Bottom-up parsing: two subtrees have been completed over the scanned part of the input. One, with root E, expands into the terminal string ( id ); the other, with root T (over F, over id), expands into the second id. The rest of the tree, up to the root G, is yet to be constructed, and the remaining tokens * id form the unscanned part of the input sentence.]

Getting Started with a Bottom-up Parser


Although we have not yet developed the parsing theory for the LR(k) parser, we can start using one in a computer laboratory. Here's how:
1. Follow the lab guidelines in copying certain directories from a software repository for Qparser to your local account, and using your particular platform. (Installation details can be found in appendix 5 or 6.)
2. Write a grammar file with a text editor. (An example is given below.) Give it some name, such as gram1.grm (note the grm suffix).
3. Generate a parser by executing

make GRM=gram1

...in Unix, or

nmake GRM=gram1

...in a Windows command prompt.

4. If successful (there may be some conflicts or syntax errors in your grammar), your parser program will be called gram1 (Unix) or gram1.exe (Windows).
5. Try out the parser like this. You may want to write a text file (called input) that contains a sentence to be parsed. Your sentence may contain C-style comments, which will be ignored. It can also contain line feeds or spaces, which are also ignored.
gram1 input

6. You won't see much of anything unless your sentence contains a syntax error, in which case the parser will complain about the error. Try inserting some errors and you'll see how that works.

Writing a Grammar
Here's a sample grammar that you can copy to get started. There are several rules about the form of the grammar that you must follow, which we'll summarize below.
// G0.grm
lexfile= ../lib/c.lex
classdefs= {
 Ctoken: EOF Identifier Real Integer String
}
globals= {
#include "eval.h"
} // end of globals

G : E EOF    // grammar G0
E : E + T
E : T
T : T * F
T : F
F : ( E )
F : Identifier
F : Real
F : Integer
F : String

Grammar Syntax Rules


- You can use // C++-style comments in your grammar.
- Copy the first 9 lines exactly as shown. If curious, you can find more details about this in chapter 10.
- The production rules are written using a colon : instead of an arrow →.
- A production rule may span more than one line, but only if the subsequent lines are indented by one space from the left margin. Start each production rule on a fresh line at the leftmost column of the line.
- Separate the tokens (terminals and nonterminals) by one or more spaces. This is an easy rule to overlook, and will cause lots of grief if you don't follow it carefully.
- The tokens named Identifier, Real, Integer and String stand for compound lexical tokens when you run your completed parser, as follows:
  a. An Identifier is a C-style identifier, starting with a letter and continuing with letters and digits.


  b. A Real is an unsigned C-style floating-point number, such as 15.6 or 5E-7.
  c. An Integer is an unsigned C-style integer, such as 567, 0777, 0x45. Note that a number that starts with 0 is considered to be octal or hexadecimal.
  d. A String is a C-style quoted string, such as "my quoted string".

Repairing Errors
If there's a syntax error or some other problem with your grammar file, more details will be found at the top of the file
gram1.rpt

This has the same name as your grammar, except with the suffix .rpt. Check with your instructor regarding errors that you don't understand. There are several different levels of error that may be reported by this generator system, and you may have encountered one of them. For example, it's possible that your grammar is ambiguous, in which case, you will see many lines of rather cryptic error complaints.

Run your Parser


If you've successfully constructed a parser as described above, you can try it out with an input sentence typed in at the console, like this:
g0
>> k+16.0*i+j*     // more on the next line
>> (45+6)
>> control-D or control-Z

If you make a syntax error, it'll be reported after each line feed. No parsing occurs until you've submitted a whole line. This also means that you can back up while typing a line to correct something. Your input sentence must be terminated with an end of file, or EOF. (That's why EOF appears in your grammar.) In Unix, type control-D to introduce an end-of-file during an interactive input session. In Windows, type control-Z to introduce an EOF. Nothing will be reported unless your sentence has a syntax error. You can also write a file containing your sentence (leave out the >> characters, of course), then call g0 with the file name, like this:
g0 filename

Seeing the Derivation Steps


This parser uses a bottom-up rightmost derivation process. You will surely want to view the derivation steps. That's easy: just call g0 with the parameter d1, like this:
g0 d1

Now you will see something like this after you've typed in the first line. Press ENTER after each display, on the prompt ...Enter to continue.
>> k+16.0*i+j* // more on the next line
--> REDUCE state 1 on F: Identifier
 1 11 WhiteSpace
 0  1 Identifier IDENT Identifier
...Enter to continue
--> REDUCE state 5 on T: F


 1 11 WhiteSpace
 0  5 F IDENT Identifier k
...Enter to continue
--> REDUCE state 6 on E: T
 1 11 WhiteSpace
 0  6 T IDENT Identifier k
...Enter to continue
--> REDUCE state 3 on F: Real
 3 11 WhiteSpace
 2 13 E IDENT Identifier k
 1 16 +
 0  3 Real DOUBLE Real 16.000000
...Enter to continue
--> REDUCE state 5 on T: F
 3 11 WhiteSpace
 2 13 E IDENT Identifier k
 1 16 +
 0  5 F DOUBLE Real 16.000000
...Enter to continue
--> REDUCE state 1 on F: Identifier
 5 11 WhiteSpace
 4 13 E IDENT Identifier k
 3 16 +
 2 18 T DOUBLE Real 16.000000
 1 17 *
 0  1 Identifier IDENT Identifier i
...Enter to continue

What you are seeing is the rightmost derivation in reverse. Each REDUCE line spells out the production rule that is involved in the derivation step. You can also see the identifier name or number value in the stack listing that follows the REDUCE line. You can also use the option D1 instead. This will print out all the steps without waiting for a prompt. Most terminals now support scrolling over many lines, so you can view it all by scrolling back. You can also save the output to a file with the file redirection command, like this:
g0 D1 inputfile > outputfile

We recommend writing an input file with your input sentence, rather than typing it into a command prompt, when using a redirected output.

Getting the Production Rules to Speak


You are no doubt wondering how to get your grammar to do something useful. You can make them speak, so to speak. Here's how:
- Attach some C++ code just after any production rule, as a semantics action. The C++ code must be enclosed in a { ... } pair.
- The C++ code must be a complete statement or statement list form, essentially anything you would normally write between a brace pair { ... }. Nested braces inside are OK. Make sure that any pairs of parentheses or brackets are matched up.
- You can include comments of the /* ... */ style or // style. Make sure that any /* ... */ comment is completely inside the outermost { ... } pair.
- The code associated with a production rule will be executed during parsing when the production rule is invoked during a bottom-up, rightmost parsing in reverse.

If you need to invoke some unusual function, that's OK, but you may need to include a header file for the function, and perhaps also modify the makefile so that your function will be linked into the parser. Include your header file in the globals section of the grammar like this:
globals= {
#include "eval.h"
#include "myheader.h"
} // end of globals

You will get a warning like this:

assigning default tag F_4_ to this production

on each production rule. Just ignore these warnings. But pay attention to any errors reported. You can print the values of an Identifier, Real, Integer or String during parsing by using one of these forms:
Identifier->getStringValue()
String->getStringValue()
Real->getDouble()
Integer->getInteger()

The first two return an ANSI string by value. The third returns a type double, and the last returns a type int. If you have syntax problems with your C++ code, they will appear in a generated C++ file called apply.cpp. You need to trace your error back to your grammar and fix it there, not in apply.cpp. To get started, try writing a few cout statements attached to grammar g0.grm. Here's an example that you can imitate:
// G0.grm
lexfile= ..\lib\c.lex
classdefs= {
 Ctoken: EOF Identifier Real Integer String
}
globals= {
#include "eval.h"
} // end of globals

G : E EOF    // grammar G0
E : E + T   { cout << " plus" << endl; }
E : T
T : T * F   { cout << " mpy" << endl; }
T : F
F : ( E )
F : Identifier { cout << " " << Identifier->getStringValue() << endl; }
F : Real    { cout << " " << Real->getDouble() << endl; }


F : Integer { cout << " " << Integer->getInteger() << endl; }
F : String  { cout << " '" << String->getStringValue() << "'" << endl; }

Build this as explained above, like this:


make GRM=g0     ...Unix, or...
nmake GRM=g0    ...DOS/Windows

A Trial Run
When you execute your parser, you will get the usual prompt. Here's a sample run under Windows.
D:\qparser\vs18\calc>g0
>> a15*16.3+b*(23+66)
 a15
 16.3
 mpy
 b
 23
 66
 plus
 mpy
>> ^Z          this is control-Z, not ^ followed by Z
 plus

D:\qparser\vs18\calc>

Notice that you need to enter an end-of-file (the control-Z) in order to get the last plus out.

Postfix
You will notice that the operators appear in postfix order: first the operands are printed, then the operators, so that
a15*16.3

comes out as
a15 16.3 mpy

Also, the very last plus printed is the + operator between 16.3 and b in the expression. We invite you to explore why this is the case by studying how the derivation tree is built and when the associated production rules are invoked in this rightmost reverse-order parsing.
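The postfix stream is exactly what a stack evaluator wants. As a hedged illustration in C++ (not Qparser code, and assuming every operand is a plain number rather than an identifier), here is how such a printed sequence could be evaluated:

#include <stack>
#include <string>
#include <vector>

// Evaluates a postfix token stream such as: 15 16.3 mpy 23 66 plus mpy plus.
// "plus" and "mpy" pop two operands and push the result -- the same order
// in which the bottom-up parser fires the E : E + T and T : T * F actions.
double evalPostfix(const std::vector<std::string>& input) {
    std::stack<double> s;
    for (const std::string& tok : input) {
        if (tok == "plus" || tok == "mpy") {
            double right = s.top(); s.pop();
            double left  = s.top(); s.pop();
            s.push(tok == "plus" ? left + right : left * right);
        } else {
            s.push(std::stod(tok));   // an operand: assumed numeric here
        }
    }
    return s.top();
}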


Chapter 5: Expression Semantics


W. A. Barrett, San Jose State University nch5.doc, vs 4.1

Introduction
We've examined production rules, grammars, parsing, ambiguity, etc. in the previous chapter. In this chapter, we examine how an expression grammar can be used to specify the order of evaluation of common binary operations. We introduce the concept of an abstract syntax tree, which has a close relationship to the derivation tree introduced in the last chapter.

Expression Grammar
We'll be using grammar g0, with some simple extensions, for much of this chapter:
G → E
E → E + T
E → T
T → T * F
T → F
F → ( E )
F → id

Derivation Tree
Recall that a derivation can be displayed as a tree. An example is given in figure 1, using grammar g0.

[Fig. 1. Derivation tree for a sentence. The root node is G; the internal nodes are the nonterminals E, T, F; the leaf nodes are terminals. Read left to right, the leaves spell: ( id ) + id * id]


The G at the top of this tree is the tree's root. It will always be the goal nonterminal of the grammar. The derivation order is downward in general. For example, G derives E through the production rule G → E, so we draw a line down from G to an E. At the next level, E derives E+T, so we draw three lines down to an E, a +, and a T. Whenever a nonterminal symbol appears in the tree, it should always be expanded by choosing some production rule. Therefore the symbols that appear inside the tree are always nonterminals. We call these internal nodes. When a terminal symbol appears in the tree, it can't be expanded. It therefore becomes a lower terminus node, which we call a leaf node. The derived sentence can be read from the tree by tracing the leaf nodes from left to right. We also see that the order (leftmost or rightmost) in which a derivation is performed is not implied in the derivation tree. The final tree will be the same whether the left nonterminal or the right nonterminal (or any other, for that matter) is expanded in any particular step.

Left Recursion
Recall that a production rule like E → E + T is said to be left-recursive. Its left member is the leftmost symbol in the rule's right member. The derivation tree of a left-recursive rule looks like this (figure 2):

[Fig. 2. Left-recursive tree: a chain of E nodes runs down the left side; each E expands to E + T, with a single T at the bottom, spelling T + T + T + T]

Notice how the E nodes form a chain down the left side of the tree, while the T objects (which may be subtrees rather than terminals) decorate the right children of that chain. Also notice that the leftmost T object (the first one in the sentence) is at the bottom of the tree, and the rightmost T is at the top of the tree. A sequence of production derivations can also yield a left recursion. In general, a nonterminal X is said to be left-recursive if there exists a derivation of the form X ⇒* Xα in the grammar. Thus X might derive a Y... , which derives a Z... , which eventually derives a string X...

Right Recursion
Recall that a nonterminal X is said to be right-recursive if X can derive a form αX. A right-recursive production rule has the form X → αX. For example, consider the production rules
E → T ** E
E → T

Just as in the left-recursive case, we can obtain sequences of T objects separated by ** objects. In fact, any sentence obtained by these right recursive rules can also be obtained by the left recursive rules
E → E ** T
E → T

There's a big difference in the derivation trees, however. The derivation tree for a right recursion looks like this (figure 3):

[A chain of E nodes runs down the right side; each E expands to T ** E, with a single T at the bottom, spelling T ** T ** T ** T]

Here, the leftmost T object (the first one in the sentence T**T**T**T) is at the top of the tree, while the last one is at the bottom.

Fig. 3. Right-recursive tree

Operator Evaluation in a Derivation Tree


Let's take another look at the derivation trees in figures 2 and 3. We'd like to use them to evaluate the algebraic expression that they represent. The tree in figure 2 represents the expression
T + T + T + T

As an element of a computer language like C or Pascal, the rules are that the + operators are supposed to be done from left to right, i.e. the + operator is said to associate from left-to-right. Suppose that the T objects are the variables A, B, C, D, respectively. Then the expression will be
A + B + C + D

and its evaluation should proceed like this, in this order:


value = A + B;
value = value + C;
value = value + D;

The order doesn't matter with addition operators, since they commute. However, if we change the order, there could be the possibility of an overflow in one ordering, but not in another. Also, if the operators were all subtractions instead, the order is clearly very important:
3 - 4 - 5

is not the same as


5 - 4 - 3

Now look at the derivation tree of figure 2 (left recursive). Assume that we wish to associate a value with each of the + operators. The structure of this tree dictates an evaluation order, through the obvious rule:

Each of an operator's subtrees must be evaluated before the operation can be performed.

This requires that the bottom-most portions of the tree must be evaluated first, which is to say, the leftmost operator. So the left-recursive tree for an expression like
A + B + C + D

will in fact force the (correct) left-to-right evaluation of the operators: A+B, then +C, then +D. An operator like + is said to be left-to-right-associative. Compare this with the evaluation order of the derivation tree in figure 3 (right recursive). Here, again, the bottom-most parts of the tree must be evaluated first. But for the expression


A ** B ** C ** D

the first evaluation would be C**D, the second would be B**(C**D), etc. So an evaluation of a tree built with right-recursive production rules would enforce a right-to-left associativity for the rule's operator. Right-to-left associativity is required by certain operators in certain languages. For example, the = operator in C performs assignment, and it's supposed to be evaluated in a right-to-left order. So a right-recursive production rule is appropriate for right-to-left associativity, and a left-recursive production rule for left-to-right associativity.
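The two tree shapes correspond to the two classic fold orders. A brief C++ sketch (illustrative only) makes the difference concrete: with subtraction as the operator, foldLeft over {3, 4, 5} computes (3-4)-5 = -6, while foldRight computes 3-(4-5) = 4.

#include <cstddef>

// Left-to-right evaluation of t[0] op t[1] op ... op t[n-1], mirroring the
// left-recursive tree: the bottom-most (leftmost) operator is applied first.
double foldLeft(const double* t, std::size_t n, double (*op)(double, double)) {
    double value = t[0];
    for (std::size_t i = 1; i < n; ++i)
        value = op(value, t[i]);
    return value;
}

// Right-to-left evaluation, mirroring the right-recursive tree: the
// bottom-most (rightmost) operator is applied first.
double foldRight(const double* t, std::size_t n, double (*op)(double, double)) {
    if (n == 1) return t[0];
    return op(t[0], foldRight(t + 1, n - 1, op));
}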

Operator Precedence
The notion that certain algebraic operators have precedence over others is an old one. It's found in the definition of every programming language. Thus, the multiplication and division operators have higher precedence than addition and subtraction. This means that the multiplication should be performed first in each of these expressions:
a + b*c
a*b + c

despite the fact that + appears first in the first expression and * appears first in the second expression. Addition and subtraction have equal precedence, and for them, the associativity rules apply. Similarly, multiplication and division have equal precedence. Some languages have many operators. ANSI C has 15 precedence levels in all among its 46 operators. The precedence of the operators is managed by the organization of the production rules. Look at grammar G0 again:
G → E
E → E + T
E → T
T → T * F
T → F
F → ( E )
F → id

Now look at the derivation trees for the expressions a+b*c and a*b+c, given in figure 4:
[Fig. 4. Derivation trees for two different sentences, a+b*c and a*b+c. In both trees, the multiplication operator * appears below the addition operator +.]

Notice how the multiplication operator * appears under the addition operator + in each case. This will clearly require that * be evaluated before +. These operators acquired their tree positions through the way in which the production rules are structured. The rules

E → E + T

and
T → T * F

clearly require that any + operator will have a T right subtree, and a multiplication can only appear in a T rule.

Equal Precedence of + and -

Suppose we want to extend grammar G0 to include subtraction and division. Subtraction and addition have equal precedence and are left-to-right associative. The associativity rule is fixed by choosing a left-recursive production rule like this:
E → E - T

This also causes equal precedence for + and -, as can be seen by considering the derivation trees of an expression like this:
a + b - c - d + e

in which the additions and subtractions will be evaluated from left to right. Equal precedence with left-to-right associativity for division is easily obtained by adding the rule
T → T / F

Parenthesizing
Grammar G0 also contains a parenthesis operator through the production rule
F → ( E )

Notice that this operator clearly has a higher precedence than *, since it appears in an F rule, derived from T, which derives the * operator. This means that the expression
a*(b+c)

will be evaluated by doing the stuff inside the parentheses first, i.e. b+c, then the multiplication. Look at the derivation tree in figure 1 again. Down the left branch is a (...) form, which happens to contain just a single identifier. However, since that tree's root is an E, it could contain a complete expression with any operators. It's clear that any evaluation of such a tree requires evaluation of the E inside a (...) pair before any operations higher in that tree's path to the root. Of course, a derivation tree does not dictate a single possible ordering of operations. For example, in figure 1, the tree under the * operator can be done either before or after the tree under the (E) form. The tree merely imposes a partial ordering on the operators, one that agrees with the usual algebraic operator precedence rules.

Adding New Binary Operators


Having just + and * is not very exciting for any language. How can we add more binary operators? Simple: follow the pattern shown in G0. You need to decide on each of these for your new operator:
1. Does it evaluate left-to-right (like +) or right-to-left?
2. Does it have the same precedence as some other binary operator?
3. If not the same precedence, then what is its precedence relative to the other binary operators?

Suppose we call the new operator @. Let's go through these questions and review the strategy:
1. If it evaluates left-to-right, we need a production rule like K → K @ T. If it evaluates right-to-left, we need a production rule like K → T @ K. (See figures 2 and 3.) In steps 2 and 3, we'll suppose left-to-right evaluation.


2. Suppose that @ has the same precedence as * in G0. Then all we need to do is add the production rule T → T @ F. That is, the new rule is just like the multiply rule and uses nonterminals T and F.
3. This breaks down into several different cases in general:
   a. Suppose that * > @ > +, meaning that @ has higher precedence than +, but lower than *. What we need is a rule that (in a sense) goes between the + and the * rule. That also requires a new nonterminal name, say K. Then we want E to derive a K, and K to derive a T, in the same way that E now derives a T, like this:
E → E + K
E → K
K → K @ T
K → T

b. Suppose that @ < +, meaning that @ has lower precedence than +. This also requires a new nonterminal K, but we'll essentially replace E by K in some rules. Here are the changes:
E → E @ K
E → K
K → K + T
K → T

c. Suppose that @ > *, meaning that @ has higher precedence than *. We need a new nonterminal K, but we'll use it in between T and F, like this (the other rules are the same)
T → T * K
T → K
K → K @ F
K → F

Abstract Syntax Trees


It should be obvious from inspecting the above trees (figures 1-4) that we don't really need the whole derivation tree in order to carry out an evaluation. We can essentially remove many of the elements that are useless for evaluation, stripping down the derivation tree to its essential components with regard to its evaluation. The result is an abstract syntax tree. Which components can be collapsed or removed?

Single Production Rules


All of the single production rules can be collapsed in the tree. For example, each derivation of the form
E ⇒ T ⇒ XXX

in the tree can effectively be replaced by


E ⇒ XXX


Parenthesized Expressions
Each derivation of the form
F ⇒ ( E ) ⇒ ( XXX )

can clearly be replaced by


F ⇒ XXX

Notice that parenthesizing is needed in infix algebraic expressions in order to determine certain precedence operation ordering. Once the derivation tree is discovered, the parentheses are no longer of any interest: the tree structure itself conveys the operator ordering. This observation also clearly applies to most uses of parenthesis pairs, as well as array bracket pairs. These are needed in the high-level syntax to make the parsing work, but once a tree is constructed, they no longer have any value.

Replacing Nonterminals with Operators


You will have noticed that most of our production rules (see the set at the end of chapter 4) are quite simple, in the sense that if an operator appears in a production rule, it appears in one and only one rule. That suggests that instead of attaching a nonterminal name to a tree node, we could attach an operator name instead.

Example
Here's an expression in grammar g0 and its companion derivation tree and AST:


(a+b)*c+d-(e/f)

[Figure 5. Derivation tree and abstract syntax tree for (a+b)*c+d-(e/f). The derivation tree carries the full chain of nonterminal expansions. The AST keeps only the operators: a - node at the root; its left child is a + subtree representing (a+b)*c+d, and its right child is a / node over e and f.]

Notice that the top part of the derivation tree corresponding to the derivation G ⇒ E ⇒ E - T has been replaced in the abstract syntax tree by the root node - and the two children + and /. The latter nodes clearly come from the derivation steps E ⇒ E - T and E ⇒ E + T down the left side of the tree. The derivation steps involved in F → ( E ) have disappeared, leaving behind only the operator structure in the AST.

Creating New Operators


What is an operator? In a very general sense, it is some evaluation rule that will apply to an ordered set of operands. We've seen that the simple +, -, *, / operators carry two operands each. The negation and NOT operators carry one operand each. A function call is an example of an operator that applies to the function parameters. We can also consider a statement form, such as the for statement, as an operation on several operands, following this production rule:
Stmt → for id := E.1 to E.2 do Stmt

This tree node will have four children, one each for the id, E.1, E.2, and Stmt, like this:


for
 +-- id
 +-- E.1
 +-- E.2
 +-- Stmt

A switch statement may have a large number of operands, one for each of the branches in the switch. The possibility of many children, whose number is only known during parsing of a particular program, means that any supporting tree data structure must accommodate an arbitrary number of children for any one root. The Ctree class described in appendix 1 is such a class. Each Ctree instance represents one node in a tree, and it may have any number of children (or siblings, depending on how you look at it). It is intended to be inherited by some attribute class; the attribute class then participates in the tree-building and walking functions of Ctree, and of course carries some information on each tree node.
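Here is a minimal sketch of such a node, using the common first-child/next-sibling representation. The class and member names (Tnode, appendChild) are illustrative stand-ins, not the actual Ctree interface from appendix 1.

// A tree node that accommodates any number of children, in the
// first-child/next-sibling style. Tnode is a hypothetical name.
#include <iostream>
#include <string>
using namespace std;

class Tnode {
    Tnode *child;      // leftmost child, or 0
    Tnode *sibling;    // next sibling to the right, or 0
    string label;      // e.g. an operator name or an identifier
public:
    Tnode(const string& s) : child(0), sibling(0), label(s) {}
    ~Tnode() { delete child; delete sibling; }   // recursive cleanup
    void appendChild(Tnode *c) {   // attach c as the rightmost child
        if (!child) { child= c; return; }
        Tnode *p= child;
        while (p->sibling) p= p->sibling;
        p->sibling= c;
    }
    void print(int depth= 0) const {   // depth-first, indented dump
        cout << string(2*depth, ' ') << label << endl;
        for (Tnode *p= child; p; p= p->sibling) p->print(depth+1);
    }
};

int main() {   // build the four-child 'for' node sketched above
    Tnode *root= new Tnode("for");
    root->appendChild(new Tnode("id"));
    root->appendChild(new Tnode("E.1"));
    root->appendChild(new Tnode("E.2"));
    root->appendChild(new Tnode("Stmt"));
    root->print();
    delete root;
    return 0;
}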

Building an AST During Parsing


An AST can be constructed during parsing by either a top-down or bottom-up parser. Regarding bottom-up parsing, the children of a particular tree node have been fully parsed at the point at which that parent node is ready to be attached to the tree. The complete derivation tree is not required by the parser, as we'll see, so an AST can be constructed on the fly during parsing. For example, when the production rule
E → E + T

is about to be applied during a bottom-up parse, we should have a tree attached to the second E, and another one attached to the T. We only have to attach a new root node, labelled +, whose children are these two trees. The production rule
F → id

is easy to apply. The new tree root is simply an identifier node, for the id, which has no children. The production rule
F → ( E )

calls for just ignoring the two parentheses, and letting E become the root node of the new tree associated with F. Regarding top-down parsing, we shall see (in later chapters) that this can be done through a table-driven parser, or the use of recursive descent. In recursive descent, a function is called on each tree node which is expected to parse through the child trees of that root node. To build an AST, each such function should acquire pointers to the child tree nodes from the child function calls, then use them to build a new node that represents the root node. Each function therefore has the responsibility of building subtrees through parsing operations, also of building a new root tree node, and finally of passing that root node's pointer to its caller. Such an AST-building mechanism is built into the Qparser LR parsing system.
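To make the bottom-up case concrete, here is a hedged sketch of the tree stack a shift-reduce parser can keep alongside its parse stack. This is not the actual Qparser mechanism; Node, trees and the reduce functions are names invented for the example.

// On-the-fly AST building in a bottom-up parser: every shifted token
// and every reduced nonterminal owns one slot on a stack of subtrees.
#include <stack>
#include <string>
#include <vector>
using namespace std;

struct Node {
    string label;          // "+", an identifier name, etc.
    vector<Node*> kids;
    Node(const string& s) : label(s) {}
};

stack<Node*> trees;        // one entry per parse-stack symbol

// a shifted punctuation token carries no subtree of its own
void shiftToken() { trees.push(0); }

// reduce by F -> id : the new tree is a childless identifier leaf
void reduceId(const string& name) { trees.push(new Node(name)); }

// reduce by E -> E + T : pop T, '+', E; push a new '+' root over E and T
void reducePlus() {
    Node *t= trees.top(); trees.pop();
    trees.pop();                        // discard the '+' placeholder
    Node *e= trees.top(); trees.pop();
    Node *plus= new Node("+");
    plus->kids.push_back(e);
    plus->kids.push_back(t);
    trees.push(plus);
}

// reduce by F -> ( E ) : the parentheses vanish; E's tree becomes F's
void reduceParens() {
    trees.pop();                        // ')' placeholder
    Node *e= trees.top(); trees.pop();
    trees.pop();                        // '(' placeholder
    trees.push(e);
}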


Chapter 6: Symbol Tables


W. A. Barrett, San Jose State University nch6.doc, vs 2.0

User Identifiers
A user identifier is a user-defined name that can appear in many places in a program. Recall that the scanner can separate user identifiers from other tokens by the fact that an identifier:
- starts with a letter and continues with letter, digit and underbar characters, and
- is not one of the reserved words of the language
Thus these are all identifiers in Pascal:
k ab123_t1 user_identifier MyNameIsBill

while these are some of the reserved words in Pascal:


if while do begin end for procedure function

We've seen that an FSM scanner can distinguish identifiers from reserved words by working from a general regular expression that defines a user identifier, like this:
[a-zA-Z][a-zA-Z0-9_]*

Although this pattern matches each of the reserved words as well, the FSM scanner is adjusted to favor a reserved-word interpretation over an identifier interpretation.
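Lexgen folds that preference into the FSM tables themselves. To see the effect another way, here is a sketch of the approach many hand-written scanners take: scan with the general identifier pattern, then check a reserved-word table before classifying the token. The names here are illustrative, not part of Lexgen.

// Classifying a scanned lexeme: a reserved word beats an identifier.
#include <set>
#include <string>
using namespace std;

enum TokenKind { tkIdentifier, tkReserved };

TokenKind classify(const string& lexeme) {
    // a few of Pascal's reserved words, for illustration
    static const char *words[] = {
        "if", "while", "do", "begin", "end", "for", "procedure", "function"
    };
    static const set<string> reserved(words, words + sizeof(words)/sizeof(words[0]));
    return reserved.count(lexeme) ? tkReserved : tkIdentifier;
}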

Roles of an Identifier
A typical program may contain hundreds of different user identifiers, and they have many different roles. In Pascal, a user identifier may take on any of these roles:
- The name of the PROGRAM
- The name of a CONST, which may have many type variations
- The name of a VAR, which may have many type variations
- The name of a TYPE
- The name of a PROCEDURE or FUNCTION
- The name of a statement LABEL (actually, in Pascal, statement labels are integers)
- The name of a RECORD FIELD
- The name of a file


Usually the first appearance of an identifier provides a definition of the identifier. Each subsequent appearance provides a reference to the identifier.

Symbol Table
A symbol table is a one-to-one mapping of a user identifier to a set of attributes. (We use the words symbol and identifier more-or-less interchangeably.) A symbol table is like a telephone book -- the name of a person can be looked up and their phone number found. Here's a simple symbol table, containing the identifiers FBI, FAA, and SEC. In general, these can have different levels of attribute, as shown. The identifier FBI has a kind, type, and address attribute, while the identifier FAA has only a kind and type attribute.

Name   Attributes
FBI    kind= VAR, type= typePointer, address= 0485h
FAA    kind= TYPE, type= typePointer
SEC    kind= CONST, type= typePoint, value= 22

In other words - given an identifier, a symbol table is searched for the identifier. If found, an associated set of attributes can be found.

Using Csymtab
A symbol table lookup system can be found in lib\symtab.h, which contains the class definition for Csymtab. Here are the principal operations provided in Csymtab:

const bool findSymbol(const string &key, Attr& attr) const;

Look for key in the symbol table. Returns FALSE if it isn't there. Returns TRUE if it is there, and copies the attribute to parameter attr.

void pushSymbol(const string &key, const Attr& attr);

Enter the identifier key in the symbol table, along with a copy of attr. This will be entered whether there's one already there or not.

findSymbol is a search operation. It searches the symbol table for the specified name, returning FALSE if it can't be found, and setting the attribute attr if it was found. The attribute may be any class, since Csymtab is a template. pushSymbol places a new key:attr pair in the symbol table. Both the key and the attribute are copied. The push part of pushSymbol refers to the way in which the key is entered. If there's another instance of this particular name, the latest one does not replace the previous one, but rather covers it up, much like the top plate in a stack covers up the ones underneath. The last instance can later be removed, to uncover the previous ones.

Example: the GOTO Label


In Pascal and most other languages, you can mark a statement with a name, then transfer control to that statement from somewhere else in your code using a goto. We've used goto statements in previous chapters. Here's an example of a Pascal function using marked statements and goto statements:
function myfunc(i, j: integer): integer;
label m1, m2;
begin
  if i > 0 then goto m1;
  i:= 15;
m1: { statement marker m1 }
  while true do begin
    j:= j-1;
    if (j < -3) then goto m1;
    if (j < -4) then goto m2;
  end;
m2: { statement marker m2 }
  for i:=0 to 22 do
    writeln('i= ', i);
end;

The two markers used are m1 and m2. These must first be declared in the label declaration seen just before the first begin. The scope of the markers is the body of the function, which extends to the last end; in the last line. A statement is marked using the colon form, like this:
m1:

Any number of goto m1 or goto m2 statements may appear. In this function, there are two goto m1 statements and one goto m2 statement. This is a terrible way to write a program, especially if several markers and many goto statements are present. It's very difficult to trace the program control flow. For example, it's not clear whether the function above runs endlessly in a loop! Yet most programming languages are expected to support the goto statement, so it's instructive to see how that's done. It should be clear that we can have more than one marked statement, as long as the marker names are different. (Having two statements with the same marker name would be ambiguous.) On the other hand, we can have any number of goto statements to the same marker name. If you want to use a marker for a statement, such as HERE, then HERE must first be declared, like this:
LABEL HERE;

Each such marker must be declared, and it can't be declared more than once. Also, the declaration must precede any use of the marker name. (C doesn't require a marker declaration. Jensen and Wirth Pascal uses numbers for statement markers, rather than identifiers; to avoid confusion, we'll use identifiers throughout this chapter.) We can then mark some statement with HERE, like this:
HERE: k:= 22;

Somewhere else, control can be passed to that marked statement through a statement, like this:
GOTO HERE;

Of course, goto is a Pascal reserved word. (It can't be written with a space between go and to, for instance.)

General Pattern for goto Markers


Here's the general pattern for using an identifier as a goto marker:

LABEL HERE;     { Pascal declaration of the statement marker HERE }
...
GOTO HERE;      { a reference }
...
HERE: ...       { another reference }
...
GOTO HERE;      { another reference }

The ". . ." parts refer to some other Pascal statements. There could be many of these. There could also be more GOTO HERE statements. Managing these rules turns out to be impossible using syntax diagrams or production rules alone. We need a separate data manager, in particular, a symbol table. Let's design symbol table attributes for a goto marker identifier.

goto Label Strategy


Here's a strategy for a statement marker, as used in Pascal. This follows the comments given above for this particular kind of user identifier. We'll need an attribute class, which we'll call Csymbol. It will look like this:
class Csymbol {
    bool referenced, defined;
    symKind skind;    // symKind is an enumerated type
public:
    Csymbol() : referenced(false), defined(false), skind(sLABEL) {}
};

We need three attributes attached to each marker symbol, as follows:
- enum Csymbol::skind. This is an enumerated type that distinguishes different kinds of symbols -- procedure, function, variable, label, etc. We'll assign the value sLABEL to a marker identifier. Notice that this will be assigned when a Csymbol object is created.
- bool Csymbol::referenced. This Boolean is TRUE if the identifier has appeared in a GOTO reference somewhere, and is FALSE otherwise. It's initially FALSE.
- bool Csymbol::defined. This Boolean is TRUE if the identifier has appeared as a statement marker somewhere, and is FALSE otherwise. It's initially FALSE.

Our strategy can now be summarized in the following table:

Pascal: LABEL HERE;
Operation: This is a first appearance of the identifier HERE. If the string HERE is already in the symbol table, declare an ERROR. Create a new symbol table entry for the string HERE by allocating a Csymbol object from the heap. Set Csymbol::skind = sLABEL; this marks the symbol table object as a LABEL, vs. something else. Set Csymbol::referenced= Csymbol::defined= false.
Comments: This makes sure that HERE isn't being used for some other purpose, also that it hasn't appeared previously in a LABEL declaration. It will henceforth be "reserved" for that purpose only, until the end of the scope of this LABEL declaration. Also, these assignments occur automatically when the Csymbol object is created.

Pascal: GOTO HERE;
Operation: Look up HERE in the symbol table. (It should be there.) If it's not there, declare an ERROR: undeclared statement marker. If (Csymbol::skind != sLABEL), declare an ERROR: HERE is not a statement marker. Set Csymbol::referenced= true; this means that the marker has appeared in a GOTO. Generate the instruction:
    JMP HERE
Comments: This makes sure that HERE has been previously declared (it's in the symbol table) and is a valid marker identifier (doesn't have some other purpose). The referenced flag is set indicating that we've seen a GOTO with this identifier. It may also be set several times during the compilation.

Pascal: HERE: stmt
Operation: Look up HERE in the symbol table. If it's not there, declare an ERROR: undeclared statement marker. If (Csymbol::skind != sLABEL), declare an ERROR: HERE is not a statement marker. If (Csymbol::defined==true), declare an ERROR: HERE has appeared previously as a statement marker. Set Csymbol::defined= true. Generate the assembler statement marker:
    HERE:
Comments: This makes sure that HERE has been declared (it's in the symbol table) and is a valid marker identifier. If defined is TRUE, that means that another HERE: marker has appeared in the source, an error. We set defined to true.

Pascal: (at end of LABEL scope)
Operation: Scan the symbol table. For each entry such that (Csymbol::skind == sLABEL), do the following: if (Csymbol::referenced == false), generate a WARNING: "No GOTO for identifier marker". If (Csymbol::defined == false), generate an ERROR: "No target for marker".
Comments: This is a sweep of the symbol table, for this particular scope. We look for all the identifiers such that skind == sLABEL. When we find one, we see if referenced is TRUE; if not, that means no GOTO has been seen for this identifier (a WARNING). We also see if defined is TRUE; if not, that means no marked statement has been seen for this identifier. This is a serious ERROR.

Designing a Strategy
Most symbol table strategies are similar to the above plan. The first appearance of an identifier is usually in a declaration. Until that moment, the identifier will not be found in the symbol table; upon that event, it should be entered.


On subsequent appearances, the identifier might be given more attributes, or referenced in some way. In most languages, all the identifier's attributes (its type) are assigned through the first declarative appearance. Subsequent appearances are references that make use of the attributes, but otherwise likely will not change them. So we expect an identifier to be in the symbol table on a reference. The compiler will then use its attributes to decide whether the reference is legal or not, also just how to make use of the identifier in the reference. We'll look at types in another chapter. A type is a very general way of ascribing attributes to programming language identifiers.

Multiple Passes
Some compilers operate by scanning their source file more than once. We call these multiple-pass compilers. These are usually done to build up attributes for the identifiers. For example, the first pass may only find all the identifier's declarations, and fill in some attributes. A second pass uses the symbol table begun in the first pass, perhaps augmenting it with additional information. Eventually, a pass will be used to generate some or all of the instructions. One advantage of a multi-pass compiler is that the declaration of some type or variable can follow one or more references. For example, a function call can appear early in the source code, with its declaration coming later. (C and Pascal don't work this way, but some languages require this flexibility). The common element in a multi-pass compiler is that the symbol table is initially empty, then built up in stages during each of the passes, i.e. it's preserved from one pass to the next.

Performance
A typical program file will have hundreds of symbols. With several .h files, the number can go to thousands of symbols. The symbol table will get built up in size as the declarations and executables are scanned. If the lookup process is just a linear search, then the symbol lookup time goes up directly with the number of symbols n, causing an n² program-performance complexity. We clearly need an efficient way of looking up a symbol. Any ANSI C++ compiler contains a multimap container that almost provides the operations we need [2]. It's a template, and can therefore be bound to any class as the attribute class. It's also very general in the sense that any object can be carried and searched for efficiently, provided that certain comparison operators are defined for the object. Unfortunately, it's difficult to get a multimap to support the pushdown hiding principle that we find useful in a compiler. So we've written our own symbol table template, using a hash table linked-list approach described later. Rather than blindly using a general template, let's look at two ways of implementing a symbol table lookup function: the binary tree and the hash table methods. We have certain requirements of any lookup system, as follows:
- rapid search for a given string;
- the string name may be short or long, in the form of a C null-terminated string;
- any number of strings may be entered in a given pass, with no effective limit;
- two or more identical keys should be carried, supported in a last-in, first-out style;
- we'd like to be able to list the strings, sorted alphabetically, with attribute descriptions, for a symbol table listing at the end of a compiler section; and
- a stack of tables is needed so that the same symbol might be pushed in several times, lifo fashion.
That last requirement is needed to support the different scopes of variables found in a typical programming language. In particular, Pascal permits declaring the same identifier several times over, though in different functions. Our symbol table system must support that requirement. It happens that map requires unique keys, so it would need some help by embedding it in a stack of maps.

Binary Tree Organization


Using a binary sorted tree, a symbol table node might look like the following [1]. This holds one identifier (pString). Attributes can be attached to this object, or relegated to a derived class.
class TSymtabNode {
    TSymtabNode *left, *right;   // ptrs to left and right subtrees
    char *pString;               // ptr to symbol string
    friend class TSymtab;
public:
    TSymtabNode(const char *pStr);   // sets left and right to NULL
    virtual ~TSymtabNode(void);      // deallocates subtrees and pString
    void Print() const;              // describe this node { attributes }
};

Here, pString is a pointer to the identifier string. It must be allocated from the heap so that the destructor can deallocate it. The pointer left may be NULL or point to a TSymtabNode object whose name is alphabetically less than this name. The pointer right may be NULL or point to a TSymtabNode object whose name is alphabetically greater than this name. The symbol table is carried as a TSymtab object. TSymtab's class definition is:
class TSymtab {
    TSymtabNode *root;
public:
    TSymtab() : root(0) {}
    virtual ~TSymtab() { delete root; }
    TSymtabNode *Search(const char *pString) const;
    TSymtabNode *Enter(const char *pString);
    void Print() const { root->Print(); }
};


[Fig. 1. Binary sorted tree symbol table illustrated: root (a member of TSymtab) points to the node "level"; each TSymtabNode carries left and right pointers, linking "barndoor", "creation", "devices", "main", "mike", "Norway" and "ostrich" in alphabetical order.]

Figure 1 illustrates a typical binary symbol table constructed by these class member functions. Initially, TSymtab::root is NULL. The first name entered is "level". It will have a NULL left and right pointer. When the name "barndoor" is entered, the search starts at "level". Search decides that "barndoor" belongs to the left of "level", and thus attempts to follow the left pointer in "level". However, it's NULL, so the new TSymtabNode carrying "barndoor" is attached to the left pointer of "level". The other symbols are entered in a similar way. When "ostrich" is entered, the search starts (as always) with "level", then proceeds to the right through "Norway", then is attached to "Norway"'s right pointer. The result is a binary tree. The average complexity is ln n (per search) if the tree happens to be perfectly balanced, which is much better than n.

Searching for a Name (TSymtab::Search)


The C++ code for this function does the following (a sketch is given below):
1. Start at the symbol table tree root.
2. Compare the wanted name W with the node's name N. The C function strcmp can be used for this purpose. It compares two strings alphabetically: if the first one is less than the second, it returns a negative value; if the first one is equal to the second, it returns 0; otherwise, it returns a positive value.
3. If W < N (alphabetically), go to the left node and repeat this operation. (This can be a recursive call.)
4. If W > N, go to the right node and repeat this operation.
5. Continue until the node is NULL or the name is found.
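Here is one possible implementation, iterative rather than recursive, but following the same steps:

// One way to write TSymtab::Search: walk down the tree, steering left
// or right on each strcmp result, until a match or a NULL pointer.
#include <cstring>

TSymtabNode *TSymtab::Search(const char *pString) const {
    TSymtabNode *p= root;
    while (p) {
        int cmp= strcmp(pString, p->pString);
        if (cmp == 0) return p;                 // found the name
        p= (cmp < 0) ? p->left : p->right;      // descend one level
    }
    return 0;                                   // name not in the table
}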


Entering a New Name (TSymtab::Enter)


The C++ code for this function does the following (a sketch is given below):
1. Start at the symbol table tree root.
2. Compare the wanted name W with the node's name N.
3. If W < N, go to the left node and repeat.
4. If W > N, go to the right node and repeat.
5. Continue until the left or right node pointer P is NULL. (We assume the name isn't in the table.)
6. Create a new symbol table object S by allocation from the heap.
7. Set pointer P to point to S.
To support multiple identical keys in a push-down ordering, this strategy needs some help. A more recent entry needs to go into the tree at a higher position than one already in the table. This requires some shifting about of the existing nodes. Whether one goes to the left or the right is immaterial. Deleting a name from a binary tree can also be messy, since there's only one pointer to the name to be deleted, but two child pointers to be linked. For most compiler purposes, there's no need to delete names from a symbol table. Creating separate tables for new name scopes is useful, and doesn't require any extensions to this basic mechanism.
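A matching sketch of one possible Enter: it tracks the address of the link that must be patched, so attaching the new node is a single assignment. Duplicate keys descend to the right here, consistent with the remark above that the choice of side is immaterial.

// One way to write TSymtab::Enter: descend as in Search, remembering
// the link to fix up; attach a new node when that link is NULL.
TSymtabNode *TSymtab::Enter(const char *pString) {
    TSymtabNode **link= &root;
    while (*link) {
        int cmp= strcmp(pString, (*link)->pString);
        // duplicates (cmp == 0) go to the right; either side would do
        link= (cmp < 0) ? &(*link)->left : &(*link)->right;
    }
    *link= new TSymtabNode(pString);    // attach the new node
    return *link;
}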

Listing the Names in Alphanumeric Order (TSymtab::print)


Printing the names in a sorted binary tree is easily done through recursive calls. Notice that we have a function in the TSymtabNode class that prints a report. This also prints an entire tree rooted in itself, so that all TSymtab needs to do is call root->Print(). What we want is a depth-first left-to-right tree walk. We print the names and/or attributes as they are seen after the left tree is printed, but before the right tree is printed. Depth-first means if the current node has a left member, go to it first. Left-to-right means we deal with the left member before dealing with the right member. Here's what TSymtabNode::Print looks like:
void TSymtabNode::Print(void) const {
    if (left) left->Print();
    // print this node and any attributes
    cout << pString << endl;
    if (right) right->Print();
}

The names will come out in alphabetical order, because that's the ordering invariant maintained as the tree was built.

Balanced vs. Unbalanced Binary Trees


The reader will notice that this tree is unbalanced. In general, this will hurt performance by an unknown amount, depending on the order in which symbols are entered in the tree. If symbols occur in a program in alphabetic order (or reverse order), then the binary tree will degenerate into a linear list, with all the performance problems expected of such a list. Keeping a tree balanced in an optimal fashion can be done, but is usually not an issue for a compiler. A balancing algorithm usually involves changing the root of the tree. Notice that we must keep the alphabetic ordering. By choosing the "middle" node (halfway between the left and right nodes) as the root node at the top level, then working down through the tree in a similar way, we can achieve a rough balance. When the tree is optimally balanced, we can expect a ln n performance (per search). Otherwise, the performance may approach n per search. The higher lookup performance must be weighed against the cost of balancing. An effective balancing system is the so-called B-tree implementation, found in [3].

Hashed Symbol Tables


Another way to organize a symbol table for compiler purposes is with a hashed symbol table.
[Figure: Symbol table access through a hashing algorithm. hashit maps a name into an index 0..7 of an 8-entry hash table; each entry heads a linked list of hash-equivalent names. "sam", "mary" and "fred" share one chain; "frank", "samuel" and "samantha" hang on chains of their own.]


A hash algorithm is some way to map any identifier into a number, called the hash code, in the range 0..H-1. H is the size or modulus of the hash table. In the diagram above, H is 8. A simple hash algorithm is to treat the identifier's characters as numbers and add them arithmetically. The modulo H of the sum is used as the hash code of the identifier. The characters might also be combined through an XOR operation. Using OR or AND would not be wise, since an OR of an arbitrary string of 7-bit characters will tend to yield many 0x7F values, and using AND will tend to produce many 0x00 values. (A sketch of the additive hash is given below.)

The ASCII characters in the name sam add to 321. Dividing 321 by 8 yields a remainder of 1, so this name is linked into index 1 of the hash table. The characters in the name frank add to 530, with a remainder of 2, so this name is linked into index 2. Since many different identifiers can map into the same hash code, different identifiers with the same hash code must be organized into a linked list. Since mary, fred and sam hash to the same code, they will be in the same linked list. sam has been entered first, then mary, and then fred. New names are effectively pushed into the linked list, treating it as a first-in, last-out pushdown stack.

Hashing reduces the search time by a factor of H relative to simple linear searching. Hashing will also be faster than a binary lookup, until the number of symbols N exceeds H*ln H. Beyond that, a binary search will be faster. What makes this attractive in a compiler is that although there's no practical limit to the number of names that can appear in a program, most functions introduce only a few names. As an embellishment, the names could be carried in a binary tree (balanced or unbalanced) in each hash list, rather than in a linear list. That would help improve compiler performance with a large number of symbols. The pushdown stack idea can be carried in a binary tree, if care is taken to preserve first-in, last-out. The names in a hash table will not be in alphanumeric order. Also, names that are similar may have very different hash codes. Thus "samuel" and "samantha" are on different hash chains.
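A sketch of the additive hash just described:

// The additive hash: sum the characters, then reduce modulo the table
// size H. hashit("sam", 8) == 1, since 's'+'a'+'m' = 115+97+109 = 321
// and 321 % 8 == 1, matching the figure above.
unsigned hashit(const char *name, unsigned H) {
    unsigned sum= 0;
    for (const char *p= name; *p; ++p)
        sum += (unsigned char)*p;   // treat each character as a number
    return sum % H;                 // a hash code in 0..H-1
}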

More about Csymtab


The Qparser system uses a covering template Csymtab<class Attr> which uses the hashing algorithm described above. A symbol table (class Csymtab) carries a set of symbol objects (e.g. class Csymbol). So you typically allocate one symbol table object (Csymtab), but allocate many symbol objects (Csymbol). It's up to you as the user to design your own Csymbol attribute class. You can call it anything you like. This system expects that the keys are in the form of strings. These are of course the user identifiers. As we've seen in our discussion of lexical analyzers, there's no need to carry literal tokens in a symbol table.

Include File
#include "symtab.h"

This include file and the associated libraries are in qparser\lib. The symbol table classes are in file symtab.h.

Declaring a Csymbol Class


An example of this was given above for the label marker class. This is the Attr class referred to in the Csymtab template.

Allocating a Symbol Table


This should appear among your global objects, or as part of a global class:
Csymtab<Csymbol> symtab;

What this does is allocate and initialize a symbol table. Initially it holds no symbols, i.e. it's empty. Also, it will hold a copy of any Csymbol class that you want to carry in the table. You might want to consider having it hold a pointer to a class object, in which case it would be declared like this:
Csymtab<Csymbol*> symtab;

Using a pointer requires some additional work on the part of your program, of course:
- Each new Csymbol object must be allocated from the heap.
- Before you close a Csymtab, the Csymbol objects should be deallocated; more about this later.

The Csymbol Class


Here's a pattern for the Csymbol object:
class Csymbol {
    // any attributes, i.e. booleans, ints, etc.
public:
    Csymbol() {}   // to initialize attributes
    // member functions to set or access your attributes
};

You will want to have a constructor with no parameters, i.e. Csymbol(). You can of course have other constructors; just make sure there's one with no parameters.

Let's assume that each user identifier is associated with a float or an int type (not a float or int value, just which type the identifier represents). With only two cases, a Boolean attribute is sufficient. So an appropriate symbol derived class would be this:
class Csymbol {
    bool isaFloat;
public:
    Csymbol() : isaFloat(false) {}
    bool isFloat() { return isaFloat; }
    void setAttribute(bool isf) { isaFloat= isf; }
};

The Boolean attribute isaFloat is set to false by default. It can be set to true through the member function setAttribute. As a matter of style, it might be better to set this attribute with a second constructor parameter, but we've chosen not to do that here.

On a Declaration
On a declaration, the user identifier should be appearing for the first time. So we perform a lookup with the identifier name. It should not be in the symbol table, so we want this code:
string name;    // should be initialized to some name to search for
Csymbol symp;
if (symtab.findSymbol(name, symp))
    cerr << name << " has been previously declared" << endl;

After testing that it is NOT in the table, we'll enter it. We also set the attribute based on the declaration:
symp.setAttribute(isafloat);     // set the attributes
symtab.pushSymbol(name, symp);   // push into symbol table

On an Appearance
On a subsequent appearance (after a declaration), the identifier should be in the symbol table. So a lookup should not fail. You can then use the attribute to decide on how that appearance should be handled, i.e.
string name;    // needs to be set to something
Csymbol symp;
if (!symtab.findSymbol(name, symp)) {
    cerr << name << " has not been declared" << endl;
    exit(0);
}
bool attr= symp.isFloat();   // get the attribute

A Fully Worked-out Example


Here's a program that creates a symbol table and illustrates its use as a name directory. It asks for first-name: last-name pairs to be entered through standard input. These are saved in the symbol table by last name, with the first name as an attribute. You can then enter any last name, and it'll report the full name, or complain that it can't find the name.
// symtest2.cpp
// Tests the symtab functions
#include <iostream>
#include <string>
#include <vector>
#include "assert.h"
#include "symtab.h"
using namespace std;

class Csymbol {
    // any attributes, i.e. booleans, ints, etc.
    string firstname;
public:
    Csymbol() {}
    Csymbol(const string& fname) : firstname(fname) {}
    const string& getFirstName() const { return firstname; }
    void setFirstName(const string& fname) { firstname= fname; }
};

Csymtab<Csymbol> symtab;   // create a symbol table

int main() {
    string choice, last, first;
    while (true) {
        cout << "i: insert a new name, like this: 'i first last'" << endl;
        cout << "r: report a name, like this 'r last'" << endl;
        cout << "l: report all the names" << endl;
        cout << "q: quit" << endl;
        cout << " ...your choice: ";
        cin >> choice;
        switch (choice[0]) {
        case 'i': {
                cin >> first >> last;
                Csymbol sym;
                if (!symtab.findSymbol(last, sym)) {   // didn't find it
                    sym.setFirstName(first);
                    symtab.pushSymbol(last, sym);
                }
                else {
                    if (sym.getFirstName() == first)
                        cout << first << ' ' << last << " previously entered!" << endl;
                    else {
                        // This is a replacement of the previously entered object
                        sym.setFirstName(first);
                        symtab.pushSymbol(last, sym);
                    }
                }
            }
            break;
        case 'l': {
                // list all the stuff in symbol table
                // The sort routine lists the key first, then the attribute,
                // i.e. last name, first name
                vector<pair<string,Csymbol> > items;
                symtab.sort(items);
                vector<pair<string,Csymbol> >::const_iterator vi;
                for (vi= items.begin(); vi != items.end(); ++vi) {
                    cout << vi->second.getFirstName() << ' ' << vi->first << endl;
                }
            }
            break;
        case 'r': {
                cin >> last;
                Csymbol sym;
                if (!symtab.findSymbol(last, sym)) {   // didn't find it
                    cout << "...can't find name " << last << endl;
                    cout << "   try again" << endl;
                }
                else {
                    cout << sym.getFirstName() << ' ' << last << endl;
                }
            }
            break;
        case 'q':
            return 0;
        default:
            cout << "...I don't understand '" << choice[0] << "'. Try again" << endl;
        }
    }
    return 0;
}

This program should of course be compiled and linked to the library in lib. Here's a session running this program from an MSDOS prompt:
i: insert a new name, like this: 'i first last'
r: report a name, like this 'r last'
l: report all the names
q: quit
 ...your choice: i bill barrett
i: insert a new name, like this: 'i first last'
r: report a name, like this 'r last'
l: report all the names
q: quit
 ...your choice: i mike wallace
i: insert a new name, like this: 'i first last'
r: report a name, like this 'r last'
l: report all the names
q: quit
 ...your choice: r barrett
bill barrett
i: insert a new name, like this: 'i first last'
r: report a name, like this 'r last'
l: report all the names
q: quit
 ...your choice: l
bill barrett
mike wallace
i: insert a new name, like this: 'i first last'
r: report a name, like this 'r last'
l: report all the names
q: quit
 ...your choice: q

It clearly entered and later found the name "barrett", with its attribute "bill". It also listed both names entered; these are sorted by last name. You can enter several different names with the same last name, and they will all be entered in the symbol table. However, only the last one entered can be retrieved through findSymbol; the others are hidden. Function bool dropSymbol(const string& name) will search for the string name and unlink the first one found from the symbol table. It reports true if it found it (and dropped it) and false otherwise. The dropped name and attribute are deleted. Note that these are copies, not originals, and should therefore be deleted. If you have two identical keys in the symbol table, and drop one, the earlier one will become the one discovered by findSymbol. This push-down property is useful in block-structured languages.

The Sorting Code


This section of our example deserves a bit more discussion. A pair is an STL template struct, defined like this:
template <class A, class B>
struct pair {
    A first;
    B second;
};

Notice that A and B are the types of the members of the pair. The names of the pair members are first and second. The way we've chosen to implement a symbol table sorting routine is to first allocate a vector of pairs. Each pair will contain a string and a Csymbol object. The string will be the symbol name, of course. That's this line:
vector<pair<string,Csymbol> > items;

The vector is initially empty. The next line calls the sort member function of the Csymtab class. This expects a vector of the type just declared (items). It fills the vector from the symbol table, then sorts them by the string element. (You can see how this is done by examining the template code found in lib\symtab.h).
symtab.sort(items);

Now that the vector items has been filled and initialized, we propose working through it with an iterator. Here's how that iterator is declared. Notice the extra space between the two > characters. This avoids an ambiguity with the >> token. This will be a constant iterator, since we don't propose changing anything in the items vector.
vector<pair<string,Csymbol> >::const_iterator vi;

Here is how we access each of the vector elements, one at a time, in sorted order. The iterator vi will access element 0, then element 1, etc.
for (vi= items.begin(); vi != items.end(); ++vi) { cout << vi->second.getFirstName() << ' ' << vi->first << endl; }

The line inside the for accesses the vector element (a pair) by dereferencing the iterator vi, like this:
vi->second.getFirstName()

Notice that second is the Csymbol part of the pair. By calling function getFirstName() on this element, we return the symbol table attribute, a string, representing the first name.


The second dereference,


vi->first

returns a string representing the symbol table key, which is the last name in our example.

Regarding an Attribute Passed by Reference


In our example above, the symbol table attribute is a Csymbol object, passed by reference to findSymbol and pushSymbol. This was decided by the symbol table declaration
Csymtab<Csymbol> symtab; // create a symbol table

This means that when


pushSymbol(last, sym)

is called, a copy of the object is connected into the symbol table structure. Also, when
findSymbol(last, sym)

is called, and is successful, a copy of the symbol table object is returned to the object sym. This has certain consequences:
- If the Csymbol data is large, making those copies may degrade performance.
- Regarding findSymbol, any information contained in sym before the findSymbol call is lost, since it's written over by this call. (sym is passed by reference.)
- A copy constructor is required for your Csymbol attribute class. You need to decide just how the object should be copied within this context.
In particular, let's say that you wish to do a symbol table lookup of some name. Getting a copy of the attribute seems OK. But suppose you want to change that attribute in some way. The change will not take effect in the symbol table until you somehow copy your changes back into the symbol object. The only way to do that is to call
replaceSymbol(last, sym);

This will perform a key lookup operation on the string last, then, if successful, will copy sym into the attribute object. You can think of this as the reverse of findSymbol: instead of copying the table entry into sym, this copies sym into the table. Unfortunately, replaceSymbol does require a second key lookup operation, which may degrade performance. (A short sketch of this find-modify-replace pattern follows.) The wrong way to replace the attribute is to call pushSymbol. What this will do is introduce a second key with the same name. It happens that this second key is pushed on top of any previous ones, stack fashion, so that a subsequent findSymbol will return the last one pushed in. So you won't notice any immediate difference. However, both keys are still in the table, and both of them will reappear when you generate a sorted list using symtab.sort. That will introduce a bug in any strategy that depends on scanning the table at the end of a scope.
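Here is the safe find-modify-replace pattern in miniature, using the float/int Csymbol class shown earlier:

// Update a copied attribute: find it, change the copy, write it back
// with replaceSymbol (not pushSymbol, which would add a second key).
Csymbol sym;
if (symtab.findSymbol(name, sym)) {   // sym is a copy of the table entry
    sym.setAttribute(true);           // change the local copy only
    symtab.replaceSymbol(name, sym);  // copy the change back into the table
}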

Passing the Attribute by Pointer


It's often better practice to allocate each attribute class from the heap, then pass its pointer to the symbol table instead of copying the attribute. Here's how the symbol table should be allocated in that case:
Csymtab<Csymbol*> symtab;

Now the findSymbol and pushSymbol operation looks like this:


Csymbol *sp= 0;
if (symtab.findSymbol(name, sp)) {
    // sp now points to the attribute
    sp->attr1= something;    // this goes into the symbol table!
}
else {


    // sp is still 0. To enter the symbol, do this:
    sp= new Csymbol(attributes);    // create a new Csymbol
    symtab.pushSymbol(name, sp);    // pointer is copied to table
}

This makes it much easier to change the attributes in the symbol table -- you have a pointer to them!
sp->attr1= something;

effectively changes both our local attribute and the one in the symbol table (they are the same object). What this does is avoid calling replaceSymbol, which of course also avoids doing a second lookup on the same key. So it's a performance gain. Unfortunately, you don't get off quite that easily. When symtab must be released, its destructor doesn't know whether to deallocate its objects or not. The assumption made in Csymtab is generally that its objects are copies of something, and if it's a copy of an ordinary object, it will be deleted by itself. If its objects are pointers, maybe they should be deleted or maybe not -- by default this container class assumes not. If the objects pointed-to will be deleted by some other mechanism, then you need do nothing else. Otherwise, you need to call
symtab.deleteAttrs()

This will not delete the symbol table structure or the keys, but it will delete all the attributes through their pointers, and set the pointers to 0.

Symbol Table Stacks


The template Csymtab also supports a stack of symbol tables. To use this feature, each symbol table must be allocated from the heap. They will be deallocated at the end of a compiler run. The idea of a symbol table stack is that the current one is at the stack top. If you call findSymbolTop, only the top symbol table is searched for the name. If you call findSymbol, all the tables are searched in order, starting with the stack top and working down into the stack. This mechanism is also useful for supporting nested identifier scopes of the sort found in C, Pascal and other languages. When we enter a new scope, we push a new (empty) symbol table on our stack, and proceed to populate it with symbols. When we leave a scope, we pop the symbol table stack and delete the popped table, taking with it all its symbols, of course. To create a new Csymtab object suited for a stack of tables, do this:
Csymtab<Csymbol*> *basetab= new Csymtab<Csymbol*>(0);
Csymtab<Csymbol*> *newtab= new Csymtab<Csymbol*>(basetab);

The first line creates a base table. The second creates another table and attaches it to the base table: the parameter basetab of this constructor is a previous pointer, to our base table. We now have two symbol tables that can be searched as a single one.
- The function newtab->findSymbol now searches both of them, starting at the top of the stack and working down into the stack. It will fail only if the key cannot be found in either of them.
- The function newtab->findSymbolTop searches only the topmost table, ignoring basetab. It will fail if the key is not in the topmost symbol table newtab.
- Function newtab->pushSymbol will write a new symbol and attribute into the topmost table newtab. We will never have to push a new symbol into a deeper table.
In this example, the symtab attribute is a pointer. You need to decide whether all the attributes carried in the symbol table should be deleted before discarding the table. If so, then call
newtab->deleteAttrs();


which will call delete on each of the pointers in the table, for the topmost table only. If you want to delete all the attributes in the stack tables, call
newtab->deleteAttrs(true);

In both cases, the attribute pointers are set to NULL, but the symbol names and the access structures remain in place. These will disappear upon deleting the top-most symbol table, like this:
delete newtab;

To clear away just the topmost symbol table of a stack, do this:


newtab->deleteTop();
delete newtab;

This will unlink and clear out the topmost table, then delete the topmost table object. Tables further in the stack are left unchanged, and the pointer to the previous symbol table is returned by deleteTop(). Neither of these will delete pointer-style attributes, which may be left dangling, so you should call deleteAttrs before deleting the table. (A sketch of scope entry and exit using these calls follows.)
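Putting the pieces together, here is a sketch of scope entry and exit. enterScope and leaveScope are illustrative names; the sketch assumes deleteTop returns the previous table pointer, as described above.

// Scope management with a stack of symbol tables.
Csymtab<Csymbol*> *scopetab= new Csymtab<Csymbol*>(0);   // outermost scope

void enterScope() {
    scopetab= new Csymtab<Csymbol*>(scopetab);   // push a new, empty table
}

void leaveScope() {
    scopetab->deleteAttrs();          // free the pointer attributes first
    Csymtab<Csymbol*> *previous= scopetab->deleteTop();  // unlink the top table
    delete scopetab;                  // discard the popped table itself
    scopetab= previous;               // the enclosing scope is current again
}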

References
[1] Ronald Mak, Writing Compilers and Interpreters, John Wiley & Sons, Inc., New York, second edition.
[2] Bjarne Stroustrup, The C++ Programming Language, Addison-Wesley, Reading, MA, third edition.
[3] R. Bayer and E. M. McCreight, "Organization and Maintenance of Large Ordered Indexes", Acta Informatica, vol. 1, no. 3 (1972), pages 173-189. A tutorial can be found in many data structure textbooks.


Chapter 7: Top Down Parsing


W. A. Barrett, San Jose State University nch7.doc, vs. 5

Top Down Parsing as Tree Building


Let's review how a top-down parser must operate, given a production-rule grammar (viz. chapter 4). A top-down parse starts at the goal symbol, and builds a derivation tree top-down, left-to-right. This is commonly done through a leftmost derivation. Top-down leftmost parsing causes the input sentence to be scanned from left to right. The general parsing problem is: given a nonterminal symbol, and some input text yet to be scanned, which production rule should be chosen among several possible candidates? This decision is made by looking at the next input token (or the next k input tokens).
[Fig. 1. Top-down parsing: a partially constructed derivation tree over the input id * id. The scanned part of the input sentence is covered by completed production rules; the root of the subtree to be constructed next faces the unscanned part of the input sentence.]

In figure 1, the parser has completed a portion of this derivation, using our simple G0 expression grammar:
G ⇒ E ⇒ E+T ⇒ T+T ⇒ F+T ⇒ (E)+T ⇒ (T)+T ⇒ (F)+T ⇒ (id)+T

We also have a sentential form in each step of the derivation. At any one step in the derivation, the front portion of the sentential form has been scanned and built into some set of production rules. We call this the closed portion of the sentential form. The unscanned part (that which follows the closed portion) is called the open portion.

More formally, let φ = xα be a left-sentential form (a sentential form produced in a leftmost derivation) in some grammar G, where x is a terminal string, and α either begins with a nonterminal or is ε (empty). Then x is the closed portion and α is the open portion. We say that the boundary between x and α is the border of the sentential form xα. We can now more precisely state the top-down parsing problem as follows:
1. If the border is at the end of the form, stop; the parsing is complete. (There are no nonterminals in the sentential form.)
2. Otherwise, locate the leftmost nonterminal N in the sentential form. Expand N by choosing an appropriate production rule N → ω from the grammar, then replacing N by ω in the sentential form.
3. Go to step 1.
The key operation here is clearly choose an appropriate production rule. How can that be done? By examining the tokens in the final sentence just past the closed portion of the sentential form. If our choice can always be made by examining no more than k tokens, we say that the grammar is LL(k). If we can make the choice in every possible case by examining just the first token of the unscanned portion, then we say the grammar is LL(1). If the next two tokens must be consulted, the grammar is LL(2). And so forth. We tackle the question of choosing the appropriate production rule in this chapter. For that, we need a pair of functions, the first set and the follow set.

First Sets
Let S be some set of strings in a grammar G. For now, these strings need not be derivations from the goal symbol, but they may consist of a mixture of terminals (tokens) and nonterminals. The set first(S) can be (loosely) defined as the set of terminal tokens (not strings of tokens) that are the leftmost terminal of all sentences derivable from the members of S in the grammar G. Put another way, consider each string s in S. Then expand s in all possible ways using the grammar's production rules to derive sentences. (We obviously have an infinite set of sentences now, in general.) Let β be any such sentence. If β is empty, then add ε to first(S). Otherwise, pick off the leftmost terminal of β and add it to first(S). Notice that although we must consider an infinite number of sentences, the number of tokens that can possibly be in first(S) is finite. This suggests that there should be an algorithm that can develop first(S). Algorithms must be finite, i.e. complete in a finite number of operations. Our algorithm clearly cannot just grind out an ever-increasing list of sentences.

First sets Algorithm


Here's how to find the first set of some set of strings S consisting of terminals and nonterminals:
1. Clear F, i.e. set F = ∅, the empty set. (The empty set contains nothing, not even an empty string.)
2. For each member s of the set S, do steps (3) and (4) below. s will typically be some string consisting of terminals and nonterminals. It may also be the empty string, ε. Recall that set S is finite and consists of strings of terminals and nonterminals, each of finite length.
3. If s = ε, add ε to F.
4. If s is not ε, let s = xβ, where x is terminal or nonterminal, and β is empty or a string of terminals and nonterminals.
   a. If x is terminal, add x to F.
   b. If x is nonterminal, add each string ωβ to the set S, where x → ω is a production rule in G.
5. Repeat steps 2-4 until no further additions to first(S) are possible. Note that we need to pay attention to each set addition and look for any augmentation of the set. Whenever an augmentation occurs, all the steps 2-4 should be repeated.

Discussion
This algorithm is clearly finite. Although steps (2-4) are repeated, and there is an implied recursion in step 4(b), it must terminate in a finite number of steps. Steps (3) and (4) are done as many times as there are strings in S. As to step 4(b), although set S may be expanded, it can only be expanded until (at worst) all the terminal tokens are in first(S). As to whether this computes the true first(S), we first note that the algorithm forms the union of the first sets of all the members of S. With regard to any one string s, steps (3) and 4(a) take care of the obvious case of s being empty or starting with a terminal; those go into the first set and we ignore anything remaining. In case 4(b), our string starts with a nonterminal. We clearly want to consider all possible single derivations from that nonterminal, again a finite number. We do that by appending ωβ to S, so that this new string will be considered later for further expansion or contributions to the first set. As we'll see, we usually aren't interested in the first set of some long string, but instead just the first set of each terminal and nonterminal in some grammar. The first set of a terminal is clearly just that terminal. The first set of a nonterminal requires examining the possible derivations from that nonterminal. However, the above algorithm clearly requires some means of defining and computing the first set of a set of strings, not just a single terminal or nonterminal.

Example 1
Consider this set of strings consisting of single terminal character strings:
S = {kla+-*, +fixture, kmax, elsie, ifk}

Then
first(S) = { k, +, e, i }

These are just the set of first characters of each of the strings in S.

Example 2
Consider grammar g0, below. We are interested in first(E), first(T), and first(F).
E → E + T
E → T
T → T * F
T → F
F → id
F → n

Then first(F) is clearly {id, n}. F is a nonterminal, and rule (4b) above specifies that since F can derive either of these two terminals, they belong in first(F). What is first(E)? From rule (4b), first(E) should contain first(T), which should contain first(F), which contains {id, n}. There are no other production rules that are of any interest, so this is the first set for nonterminal E. It's also the first set of nonterminal T.
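The first-set rules translate directly into a small fixed-point computation. Here is a standalone sketch, not part of Qparser; Rule, EPS and computeFirst are names invented for the example, with EPS standing in for the empty string ε.

// Compute first sets for all nonterminals of a grammar by iterating
// until no set grows. Terminals are any symbols never used as a lhs.
#include <map>
#include <set>
#include <string>
#include <vector>
using namespace std;

typedef pair<string, vector<string> > Rule;   // lhs -> rhs symbols
const string EPS = "<eps>";                   // represents the empty string

map<string, set<string> > computeFirst(const vector<Rule>& rules) {
    set<string> nonterms;
    for (size_t i= 0; i < rules.size(); ++i)
        nonterms.insert(rules[i].first);

    map<string, set<string> > first;
    bool changed= true;
    while (changed) {                 // iterate to a fixed point
        changed= false;
        for (size_t i= 0; i < rules.size(); ++i) {
            set<string>& f= first[rules[i].first];
            size_t before= f.size();
            const vector<string>& rhs= rules[i].second;
            bool allEps= true;        // has every symbol so far derived eps?
            for (size_t j= 0; j < rhs.size() && allEps; ++j) {
                if (!nonterms.count(rhs[j])) {         // terminal: add it, stop
                    f.insert(rhs[j]);
                    allEps= false;
                } else {                               // nonterminal X
                    set<string>& g= first[rhs[j]];
                    for (set<string>::iterator it= g.begin(); it != g.end(); ++it)
                        if (*it != EPS) f.insert(*it); // add first(X) minus eps
                    allEps= g.count(EPS) != 0;         // go on only if X can vanish
                }
            }
            if (allEps) f.insert(EPS);                 // the whole rhs can vanish
            if (f.size() != before) changed= true;
        }
    }
    return first;
}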

Example 3
Consider this grammar, which we'll call G1:
G  → E $        (1)   ; $ stands for EOF
E  → T E'       (2)
E' → + E        (3)
E' → ε          (4)
T  → F T'       (5)
T' → * T        (6)
T' → ε          (7)
F  → ( E )      (8)
F  → id         (9)

What makes this interesting is that some of the nonterminals can derive the empty string, so this calls for using rules (3) and (4b) above.
Clearly, first(F) = { id, ( }.
What is first(T')? From production (6), token * belongs in first(T'). From production (7), ε belongs in first(T'). So first(T') = { *, ε }.
Similarly, first(T) = first(F T') = first(F) = { id, ( }. Note that first(F) does not contain ε.
first(E') = { +, ε }
first(E) = first(T E') = first(T) = { id, ( }, since first(T) does not contain ε.
first(G) = first(E $) = { id, ( }, since first(E) does not contain ε.

Example 4
We make a slight change in the above grammar. Call this G2:
G  → E $        (1)   ; $ stands for EOF
E  → T E'       (2)
E' → + E        (3)
E' → ε          (4)
T  → F T'       (5)
T' → * T        (6)
T' → ε          (7)
F  → ( E )      (8)
F  → id         (9)
F  → ε          (10)

By adding production rule (10), we permit F to derive the empty string. That changes some of the first sets, as we'll see below.
first(F) = { (, id, ε }
first(T') = { *, ε } ...as before
first(T) = first(F T'), but now F can derive ε, so we need to add first(T') to our set. The result is first(T) = { (, id, ε } ∪ { *, ε } = { (, id, *, ε }.


first(E') = { +, ε } ...as before
first(E) = first(T E'), but now both T and E' can derive ε, so we discover that first(T) and first(E') must both be added, yielding first(E) = { (, id, *, +, ε }.
first(G) = first(E $) = { (, id, *, +, $ }, since first(E) contains ε.

Follow Sets on a Nonterminal


follow(X), where X is a nonterminal, is the set of derivable terminal tokens of length 1 or less that can follow X in some leftmost sentential form in grammar G. We can visualize what this means by constructing all possible sentential forms. Recall that these may contain terminals and/or nonterminals, but must be derived from the goal nonterminal in the grammar. Now inspect each sentential form and find the nonterminals in each one. Let N be one such nonterminal. If N is followed by a terminal x in some sentential form, then add x to follow(N). Note that all possible sentential forms includes all the right-most derivations, left-most derivations and in-between derivations. If you find a sentential form such that N is followed by another nonterminal P, ignore it; there should be another one in which P has been expanded into a terminal string. Also, if N appears at the right end of some sentential form, then you should add the empty string to follow(N). If we place an EOF token ($) at the end of each goal production rule, then there can never be a nonterminal that is at the right end of a sentential form. That's why we wrote the goal production rule in grammar G1 like this:
E → T E' $

instead of like this:


E → T E'

More formally, follow(X) = { x | S ⇒* uXy and x ∈ first(y) }, where S is the goal symbol in G and x is a terminal token in G. u and y are either empty or some string of terminals and nonterminals. That is, we consider all the possible sentential forms (derivable from S), any one of which will have the form uXy; then follow(X) will contain the first set of y. This assumes that the goal production rule is terminated with the terminal EOF ($), and therefore every nonterminal X will have some terminal (perhaps the EOF token) that follows it in a subsequent derivation.

Follow Set Algorithm


We obviously can't use this definition as an algorithm -- the number of sentential forms is infinite. But we can find the follow sets with a finite algorithm. It's important to have the first sets on the nonterminals ready when you run this algorithm. Also, it develops the follow sets for all the nonterminals, rather than trying to do just one at a time.
1. Start with follow(X) = ∅, the empty set.
2. Find each nonterminal X in the right members of the production rules of the grammar.
3. Any such production rule will have the form Y → αXβ (α and β may be empty). Then add first(β follow(Y)) to follow(X).


4. Repeat steps (2) and (3) until no further additions to any of the follow sets are possible.
Notice that this algorithm draws upon follow sets that may be empty initially, but that are augmented as the algorithm proceeds. Whenever a follow set is augmented, it's important to go through the entire set of production rules again, to make sure that some other follow set does not require an augmentation. The algorithm must clearly terminate since there are a finite number of terminals and nonterminals, hence augmentation must cease (in the worst case) when each follow set carries all the terminals.
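The same fixed-point style handles the follow sets, once the first sets are available. This sketch reuses the Rule and EPS names from the first-set sketch given earlier.

// Compute follow sets: for each rule Y -> ...X beta, add
// first(beta) minus eps to follow(X), plus follow(Y) when beta can vanish.
map<string, set<string> > computeFollow(const vector<Rule>& rules,
                                        map<string, set<string> >& first,
                                        const set<string>& nonterms) {
    map<string, set<string> > follow;
    bool changed= true;
    while (changed) {                    // iterate to a fixed point
        changed= false;
        for (size_t i= 0; i < rules.size(); ++i) {
            const string& Y= rules[i].first;
            const vector<string>& rhs= rules[i].second;
            for (size_t j= 0; j < rhs.size(); ++j) {
                if (!nonterms.count(rhs[j])) continue;  // only nonterminals
                set<string>& f= follow[rhs[j]];
                size_t before= f.size();
                bool betaEps= true;      // can everything after X vanish?
                for (size_t k= j+1; k < rhs.size() && betaEps; ++k) {
                    if (!nonterms.count(rhs[k])) {      // terminal
                        f.insert(rhs[k]);
                        betaEps= false;
                    } else {
                        set<string>& g= first[rhs[k]];
                        for (set<string>::iterator it= g.begin(); it != g.end(); ++it)
                            if (*it != EPS) f.insert(*it);
                        betaEps= g.count(EPS) != 0;
                    }
                }
                if (betaEps) {           // beta can vanish: add follow(Y)
                    set<string>& g= follow[Y];
                    f.insert(g.begin(), g.end());
                }
                if (f.size() != before) changed= true;
            }
        }
    }
    return follow;
}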

Example
Let's work through an example, using our grammar, G1:

G  → E $        (1)   ; $ stands for EOF
E  → T E'       (2)
E' → + E        (3)
E' → ε          (4)
T  → F T'       (5)
T' → * T        (6)
T' → ε          (7)
F  → ( E )      (8)
F  → id         (9)

Weve already worked out the first sets for the nonterminals, given below: first(F) = {id, ( } first(T) = { *, } first(T) = {id, ( } first(E) = { +, } first(E) = {id, ( } first(G) = {id, (} Now we can develop the follow sets: follow(E): E is followed by ) in rule (8), so ) belongs in this set. Also, E appears in rule (1), where it is followed by $. E is in rule (3), whence follow(E) must be added to follow(E). Thus, follow(E) = { ), $ } follow(E) follow(E): E appears in rule(2), whence follow(E) = follow(E) Thus, follow(E) = { ), $ }. Since follow(E) belongs in follow(E) and vice versa, we expect no further augmentation of either of these sets. follow(T): T appears in rule (2), so we want to add first(E follow(E)) to follow(T). Since first(E) contains , this becomes {+} follow(E) = { +, ), $ }. T also appears in rule (6), whence we need to add follow(T) to follow(T). Thus, follow(T) = { +, ), $ } so far. follow(T): T appears in rule (5), whence we need to add follow(T) to follow(T). In general, follow(T) = follow(T). Chapter 7, Top Down Parsing, page 149

Thus, follow(T') = follow(T) = { +, ), $ }, and we expect this to be final.

follow(F): F appears in rule (5), whence we need to add first(T' follow(T)) to follow(F). Since first(T') contains ε, this becomes { * } ∪ follow(T). In general, follow(F) = first(T' follow(T)). Thus, follow(F) = { *, +, ), $ }.

follow(G): This is just { }, since nothing can follow the EOF symbol, which is always the last token in a sentence derived from G.

It's important to go back through our list, using the "in general" formula in each case, to see if any sets can be augmented. If no sets are augmented, then we can stop. In this case, a second pass shows no augmentation in any set, so our follow sets are complete. Here is our complete list:

follow(E)  = { ), $ }
follow(E') = { ), $ }
follow(T)  = { +, ), $ }
follow(T') = { +, ), $ }
follow(F)  = { *, +, ), $ }
follow(G)  = { }

Follow Sets for Grammar G2


The follow sets for grammar G2 are the same as for G1, as it happens:

follow(E)  = { ), $ }
follow(E') = { ), $ }
follow(T)  = { +, ), $ }
follow(T') = { +, ), $ }
follow(F)  = { *, +, ), $ }
follow(G)  = { }

Making a Deterministic LL(1) Parser


Consider our grammar G1 again, and recall that the crucial decision during parsing (which occurs over and over, of course) is: which of several candidate production rules applies in the next step? We already know quite a bit about the next step; we know the production rule's left member! With that, we can limit our choice to those production rules in the grammar with that particular left member. Suppose that the left member is E. Then there's no choice; the only production rule provided is
E → T E'

So we can immediately attach the children T and E' under this nonterminal. This amounts to replacing E (as the leftmost nonterminal) with T E'. However, suppose the left member is E'. There are two production rules associated with this nonterminal,
E' → + E
E' → ε


[Fig. 2: a partially built parse tree. The scanned front of the input is already covered, and the leftmost nonterminal E' is about to be expanded over the front of the unscanned input.]

Given such a real choice, we can examine the next character in the input list. Notice the situation in terms of our tree-building (figure 2). We have partially constructed the tree from the top down, so that the front end of the input string abc ... uvw has been covered. Now nonterminal E' is expected to cover at least some of the front end of the string xyz .... This is where our first set comes into play. If E' must derive the front end of string xyz..., then it should be clear that character x must either be + or be in follow(E'). Put another way, there must be some sequence of derivations from E', like this:
E' ⇒ ω ⇒ ... , with x ∈ first(first(ω) ∪ follow(E'))

where

a b c ... u v w (already scanned)

x y z ... (yet to be scanned)

It's often the case that E' can derive the empty string, so we need to consider the follow(E') set as well. It may also be the case that xyz... is empty, in which case both the first and follow sets will contain the empty string. This is the key that we need to make our production rule decisions, in fact, all of them:
Choose the production rule A → ω in each step such that x ∈ first(first(ω) ∪ follow(A)), where x is the next unread character in the input list.

Let's see how this plays out for grammar G1 above, by designing an LL(1) selector table for it:

Nonterminal | Input characters | Production Rule | first-follow set
G           | id, (            | G → E $         | first(E $ follow(G)) = first(E $)
E           | id, (            | E → T E'        | first(T E' follow(E))
E'          | +                | E' → + E        | first(+ E follow(E')) = first(+ E) = { + }
E'          | ), $             | E' → ε          | first(ε follow(E')) = follow(E')
T           | id, (            | T → F T'        | first(F T' follow(T))
T'          | *                | T' → * T        | first(* T follow(T')) = first(* T) = { * }
T'          | ), +, $          | T' → ε          | first(ε follow(T')) = follow(T')
F           | (                | F → ( E )       | first(( E ) follow(F)) = { ( }
F           | id               | F → id          | first(id follow(F)) = { id }


By constructing this selector table, we discover two welcome side-effects:
- we can show that every parsing decision is unambiguous. This will be the case if the sets of input characters associated with a particular nonterminal are pairwise disjoint.
- we can catch syntax errors.
Regarding the parsing decisions: suppose that the E' nonterminal listed the input + for both of its production rules. We would then have to conclude that whenever E' appeared in a parse, and the input list contains +, the parser could not decide which one to choose. So the input character sets must be pairwise disjoint for each of the nonterminals in the table. Let's look at each set of production rules:
G  → E $      ; only one rule here, no problem
E  → T E'     ; only one rule here, no problem
E' → + E      ; first-follow is +
E' → ε        ; first-follow is $ ), pairwise disjoint
T  → F T'     ; only one rule, no problem
T' → * T      ; first-follow is *
T' → ε        ; first-follow is + ) $, pairwise disjoint
F  → ( E )    ; first-follow is (
F  → id       ; first-follow is id, pairwise disjoint

Regarding syntax errors, notice that for any given nonterminal in the tree, about to be expanded, there is a limited set of possible input characters. If the next character isn't among these, then we have a syntax error. For example, if the nonterminal is E, and we see a + token next, that's a syntax error, since E is only compatible with id or (. The reader will also notice that we have obtained this parsing table through a finite process, in fact one that can be worked out by hand! And yet, we have covered all possible input sentences, whether with syntax errors or not.

Example
Consider the input string
id*(id+id)$

Let's construct a parse for it using the selector table above. Rather than draw a lot of trees, we'll construct a table to follow the parse, step by step. Note that G is our goal symbol. Here is the complete algorithm. Our LL(1) parser requires a read head on the input list (IP) and also on the evolving sentential form (SP), as follows:
1. When SP sees a nonterminal, it invokes the selector table, looking at the next character in the input list without removing it.
   a. The nonterminal is then replaced by its right member.
   b. SP is made to point at the leftmost token of the right member.
2. If SP sees a terminal token, it matches up the terminal against the terminal seen by IP.
   a. If they don't match, there's a syntax error (which is actually caught in step 1).


   b. If they match (they should), both SP and IP are advanced one token.
3. Repeat steps 1 and 2 until the parse is complete.
The match step is shown as a "scan xxx" entry in the table. The remaining-input column shows the unscanned input after each step.

Input sentential form          | production rule | OK? | remaining input string
G                              | G → E $         | yes | id*(id+id)$
E $                            | E → T E'        | yes | id*(id+id)$
T E' $                         | T → F T'        | yes | id*(id+id)$
F T' E' $                      | F → id          | yes | id*(id+id)$
id T' E' $                     | scan id         | yes | *(id+id)$
id T' E' $                     | T' → * T        | yes | *(id+id)$
id * T E' $                    | scan *          | yes | (id+id)$
id * T E' $                    | T → F T'        | yes | (id+id)$
id * F T' E' $                 | F → ( E )       | yes | (id+id)$
id * ( E ) T' E' $             | scan (          | yes | id+id)$
id * ( E ) T' E' $             | E → T E'        | yes | id+id)$
id * ( T E' ) T' E' $          | T → F T'        | yes | id+id)$
id * ( F T' E' ) T' E' $       | F → id          | yes | id+id)$
id * ( id T' E' ) T' E' $      | scan id         | yes | +id)$
id * ( id T' E' ) T' E' $      | T' → ε          | yes | +id)$
id * ( id E' ) T' E' $         | E' → + E        | yes | +id)$
id * ( id + E ) T' E' $        | scan +          | yes | id)$
id * ( id + E ) T' E' $        | E → T E'        | yes | id)$
id * ( id + T E' ) T' E' $     | T → F T'        | yes | id)$
id * ( id + F T' E' ) T' E' $  | F → id          | yes | id)$
id * ( id + id T' E' ) T' E' $ | scan id         | yes | )$
id * ( id + id T' E' ) T' E' $ | T' → ε          | yes | )$
id * ( id + id E' ) T' E' $    | E' → ε          | yes | )$
id * ( id + id ) T' E' $       | scan )          | yes | $
id * ( id + id ) T' E' $       | T' → ε          | yes | $
id * ( id + id ) E' $          | E' → ε          | yes | $
id * ( id + id ) $             | scan $          | yes | (empty)
done
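The stepping algorithm above is small enough to express directly in code. Here is a hedged C++ sketch of such a table-driven LL(1) engine; the Symbol type, the global table objects, and the choice to keep only the unmatched suffix of the sentential form (rather than the full form shown in the table) are illustrative assumptions, not part of the book's tool set:

#include <map>
#include <string>
#include <utility>
#include <vector>
using namespace std;

typedef string Symbol;

// Selector table: (nonterminal, next input token) -> right member.
// Filled from the table above, e.g. table[make_pair("E'", "+")] holds
// { +, E }; an epsilon rule such as E' holds an empty right member.
map<pair<Symbol, Symbol>, vector<Symbol> > table;
map<Symbol, bool> isNonterminal;

// input must end with the EOF token "$"
bool parse(const vector<Symbol>& input) {
  vector<Symbol> form;        // unmatched suffix of the sentential form
  form.push_back("G");        // SP sits at the front of this vector
  size_t ip= 0;               // IP: the read head on the input list
  while (!form.empty()) {
    Symbol top= form.front();
    form.erase(form.begin());
    if (isNonterminal[top]) {       // step 1: expand through the table
      pair<Symbol, Symbol> key(top, input[ip]);
      if (table.count(key) == 0) return false;   // syntax error
      const vector<Symbol>& rhs= table[key];
      form.insert(form.begin(), rhs.begin(), rhs.end());
    }
    else {                          // step 2: match a terminal token
      if (top != input[ip]) return false;        // syntax error
      ip++;                         // advance both SP and IP
    }
  }
  return ip == input.size();        // every token consumed, including $
}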

What's Wrong with g0?


You will of course have noticed that grammar g1 is quite different from grammar g0 that we examined in chapter 4. Why didn't we use g0 for this LL(1) parser? And is g1 really equivalent to g0, in terms of the generated strings? The answer to the first question is quite simple: g0 is not an LL(1) grammar. Here's g0:
G → E
E → E + T
E → T
T → T * F
T → F
F → ( E )
F → id

Now consider the first-follow set for nonterminal E: E derives E + T, whose leftmost symbol is E itself, which tells us nothing. E also derives T, which can derive F, which can derive sentences starting with ( or id. So we conclude that first(E) = { (, id }. Unfortunately, this doesn't suffice to distinguish the two production rules


E → E + T
E → T

since for both of them, the right members derive the same first set: { (, id }. The same can be said of the two T productions; both right members derive { (, id }. This problem arose because nonterminals E and T are each left-recursive. (Recall that a nonterminal X is left-recursive if there's a derivation X ⇒ X ...). It's easy to show that any grammar with a left recursion will fail this LL(1) validity test.

Is g0 Equivalent to g1?
We need to find a grammar that is both LL(1) and correctly represents algebraic expressions. It happens that right recursion is (in general) acceptable to LL(1). Unfortunately, just changing
E → E + T  and  T → T * F

to
E → T + E  and  T → F * T

will not work. The new grammar will be LL(1), but the operators + and * will associate from right to left instead of left to right. This problem will carry over to any operator that must associate from left to right. It's easy to arrange this for an LR parser, which can handle either form of recursion, but we need to do something else for an LL parser. That something else amounts to examining which forms are generated by the two g0 rules (the E rules), and seeing if something similar can be done in LL(1). We've shown (in chapter 4) that the two E rules generate a sequence of one or more T forms, separated by a +. We can now easily show that these three rules (from g1) do the same thing:
E  → T E'   (3)
E' → + E    (4)
E' → ε      (5)

Let's ignore rule (5) for the moment and consider what E generates using rules (3) and (4):
E ⇒ T E' ⇒ T + E ⇒ T + T E' ⇒ T + T + E ⇒ T + T + T E' ⇒ ...

So here we have our list of one or more Ts, each pair separated by + tokens. Also notice that when E' appears, it's the last symbol in the list. So far, so good. At some point, we can invoke the empty E' rule (5), which terminates this process, yielding a finite sentence consisting of T forms separated by + tokens. Notice that we can obtain a single T through this derivation:
E ⇒ T E' ⇒ T

Also notice there's no way to obtain an empty string from E. We can obtain any number of T forms separated by + as shown above. When we have as many as we desire, the derivation is stopped by applying the empty E' rule. So the E and E' production rules are equivalent to those in g0 in the sense of generating similar strings. Is the associativity of + the same, i.e. left-to-right, in both grammars? It is. To see that associativity remains left-to-right, note that the top-down LL(1) parser grinds out each of

the + symbols from left to right. So we could generate operations in the same order as the parsing, which will have the correct associativity. Also, a derivation tree (left for an exercise) will show that the leftmost + will fall under the remaining + symbols, which proves that left-to-right associativity is satisfied. A similar argument can be applied to the T and T' production rules, to show that they, too, are now both LL(1) and cause * to associate from left to right. Of course, the * operator also has higher precedence than + for the same reasons as discussed in chapter 4.

Summary
We've introduced a simple parsing method that serves for top-down parsing. A parsing engine can be constructed using first and follow sets defined on nonterminals. It's easy to understand how it functions. However, LL(k) parsers have a significant defect in that left-recursive production rules must be rewritten to avoid parsing conflicts. In fact, it can be shown that the grammars that are LL(k) are a proper subset of the LR(k) grammars. The number of parsing steps for LL(1) tends to be significantly larger than for LR(k), and in general, more production rules are required for a given grammar. For these reasons, the LL(1) parsing approach is rarely used today. We have explored it primarily for the parsing exercise and for the sake of completeness in examining different parsing approaches. The theory of LL(k) grammars also forms the basis for recursive-descent parsing, discussed in the next chapter. Recursive-descent parsers are used extensively for small and large languages, are easy to grasp, and require no special parsing tools to implement.



Chapter 8: Parsing with Syntax Diagrams


W. A. Barrett, San Jose State University nch8.doc

Introduction
Recall that a parser is a program that collects tokens into clauses of a language. A clause is some logical component of a program that carries a simple meaning with respect to the program's operations. A clause may cover a rather large portion of the source file, and include sub-clauses. We can think of an if-then-else component as a clause, even though each of the sub-clauses may be very large. Here's what an if-then-else clause might look like as part of a program:
IF BooleanExpression THEN Statement ELSE Statement

The BooleanExpression and the two Statements are themselves clauses, and could extend over many lines. One of the Statement clauses could be another if-then-else clause. We can talk about the general properties of such a clause without knowing any of the details of its sub-clauses. In this case, we can claim that, at runtime, the BooleanExpression will be evaluated. It's expected to return a true or a false value. If its value is true, then the first Statement is executed, but not the second. If the Boolean's value is false, then the second Statement is executed, but not the first. It's clear that our compiler can generate assembly code based on such a clause, again without knowing any of the details of the three sub-clauses. Here's what a compiler might generate for this clause:


{assembly code to evaluate BooleanExpression; result 1 or 0 left in AX}

Compare the Boolean value to 0


cmp ax,0

[Fig. 1. Syntax diagram for an identifier]

If the result is "equal", we have a false condition, so jump to the ELSE clause
je $LAB_005

If the result is "not equal", we have a true condition, which "falls through" the je instruction. This is the first Statement.
{assembly code for the first Statement}

The next jmp skips over the else statement. We don't want to execute it if we've done the first statement.
jmp $LAB_006

Here's where the second statement begins


$LAB_005:
{assembly code for the second Statement}
$LAB_006:

Before we get into just how to generate assembler code for clauses, we need to discuss a way of describing a programming language. It turns out that regular expressions, while excellent for language tokens, are not sufficiently powerful to describe a programming language. We'll look at two widely-used approaches to this problem, first syntax diagrams, then (in later chapters) context-free production


rules.

Syntax Diagrams
A syntax diagram is a directed graph with:
- a unique name,
- a unique entry node,
- paths through other nodes containing tokens and names of syntax diagrams, and
- an exit point.
A simple syntax diagram is given in Figure 1. This diagram describes an identifier. You can think of any one diagram as a finite-state machine, except that we use a different notation style. The states are points on the directed edges, for example, points A, B, C, D, E, F, G in Figure 1. Instead of characters along the edges, we scan tokens, which are shown in rounded boxes. The tokens in Figure 1 are letter and digit. An identifier can be generated or scanned by this diagram by following the arrows through the diagram. You can think of the diagram as a kind of railroad track, with the tokens as a kind of railroad station in which the train drops or collects mail. The left-most point is usually the starting state, point A. From A, it's clear that a letter must be scanned. That takes us to state B. Notice the rounded fillets at the track junctions. These are like railroad track switches. You aren't allowed to move from A through letter, then to D, because the fillet is directed the wrong way, and the arrow on the track near D suggests going left, not right there. So you must move to point B. From point B, you have three choices: to go to point C, to point G, or to point E. (The fillets and arrows permit any of these three). Clearly if the next token is a letter, then we should follow the track to point C, and accept the letter, taking us to point D, which can only lead back to B. If the next token is a digit, then we should follow the track to point E, accept the digit, taking us to point F, which can only lead back to B. It's clear that we can accept any number of letters or digits with this diagram by making the appropriate choice at point B. When the next token is neither a digit nor a letter, we should go to point G, and thence out of this diagram. For example, the identifier
kk30b

would be scanned by the following state sequence, or path through this diagram. Notice that we accept a letter token between states A and B, also between C and D, and a digit token between states E and F.
A <k> B C <k> D B E <3> F B E <0> F B C <b> D B G

Identifier Described with Production Rules


We've seen previously how to describe an Identifier with a regular expression:
[a-zA-Z][a-zA-Z0-9_]*

We can also use production rules to describe an Identifier, like this:


Identifier → Letter ( Letter | Digit | '_' )*

where Letter stands for any alphabetic character [a-zA-Z] and Digit for [0-9]. This form (or the syntax diagram) immediately lends itself to a recognizer function, like this:


bool Identifier() {
  if (isLetter(nextChar())) {
    fetchChar();   // advance the read head past the first letter
    while (isLetter(nextChar()) || isDigit(nextChar()) ||
           '_' == nextChar()) {
      fetchChar();
    }
    return true;
  }
  else return false;
}

You'll observe that this also follows the syntax diagram. If the next character is a letter (isLetter(nextChar())), then we advance the read head (fetchChar). We continue to advance the read head through more letters, digits and underbars, stopping when the next character isn't one of those.

A Larger Example
Figure 2 gives a larger example, one that defines an unsigned number in terms of an unsigned integer.


Fig. 2. Two syntax diagrams describing "unsigned integer" and "unsigned number"
This contains two syntax diagrams, one for each of these clauses. The unsigned integer diagram should be clear from our discussion of identifier. We start at the left edge, then can accept any number of digit tokens until we choose to exit from the right edge. The unsigned number diagram introduces a new concept, one that makes syntax diagrams significantly different from a simple FSM. We can refer to one diagram by name in another diagram. In this case, we refer to the unsigned integer diagram by enclosing that name in a rectangular box. The idea is that in forming an unsigned number, we can draw upon the unsigned integer diagram to help us form subclauses. So, in moving from point A to point B in the unsigned number diagram, we in fact are expected to work completely through the unsigned integer diagram. We might also have to


work through the unsigned integer diagram in getting from point B to D along the lower path, and from point D to point F along the upper path. This makes it quite clear that an unsigned number must always start with an unsigned integer, for example, with
156

At point B (fig. 2), we can choose to scan a decimal point, or not, depending on the circumstances. If we see a decimal point next in the source, then we accept that token, and prepare to scan another unsigned integer. This takes us to point D. Our number so far might look like this:
156.7023

where the 156 was formed by the first invocation of the unsigned integer diagram and the 7023 by the second invocation of that diagram. At point D, a choice is provided. We can scan a letter E, or follow path G to H thence out of the diagram. If we scan an E, we can accept a "+" or "-" sign (or not), and then a third unsigned integer. That takes us to F, then H, and out of the diagram. For example, the complete number
156.7023E-3

would be accepted by this diagram. Of course, an unsigned number can be expressed by a regular expression and embedded in a DFSM. In fact, that's how identifiers and numbers are recognized in most compilers. We prefer to relegate such tasks to a DFSM rather than a syntax diagram parser, although it could be done either way. In any case, a scanner is still needed to read an input file and skip comments, so it might as well also collect these larger token components. This diagram accurately defines the syntax of a Pascal number, whether floating point or integer. Notice that if a decimal point is present, there must be at least one digit (possibly 0) preceding and following the decimal point. Although it's not shown here, the E of the exponent portion can also be in lower case, i.e. e.
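Following the same pattern as the Identifier recognizer shown earlier, the unsigned number diagram translates almost line-for-line into code. This is a hedged sketch, assuming the same hypothetical nextChar, fetchChar and isDigit helpers used above:

bool UnsignedInteger() {
  if (!isDigit(nextChar())) return false;  // at least one digit required
  while (isDigit(nextChar())) fetchChar();
  return true;
}

bool UnsignedNumber() {
  if (!UnsignedInteger()) return false;    // the leading integer: 156
  if (nextChar() == '.') {                 // optional fraction: .7023
    fetchChar();
    if (!UnsignedInteger()) return false;  // digits must follow the point
  }
  if (nextChar() == 'E' || nextChar() == 'e') {  // optional exponent: E-3
    fetchChar();
    if (nextChar() == '+' || nextChar() == '-') fetchChar();
    if (!UnsignedInteger()) return false;
  }
  return true;
}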

Arithmetic Expressions
Figure 3 is a set of four syntax diagrams. This set of diagrams describes an arithmetic expression. An expression is some algebraically legal combination of addition, subtraction, multiplication and division operators, with parentheses and unary minus and unary plus. Here are some expression forms that the syntax diagrams in Figure 3 will accept:
25.15          ; a number by itself is an expression
a22            ; an identifier by itself is an expression
x+y-z*22/13    ; a compound algebraic expression
-15+16.5*x22   ; the 15 is preceded by a unary 'minus'
a*(b-c)        ; another compound expression, using parentheses
(((-b)))       ; any number of nested parentheses are OK

Tracing even the simplest of these expressions through the diagrams can be tedious, but it's worth doing a few times to become familiar with how they describe sentences in a language. We'll use these diagrams to construct a recursive-descent parser for expressions, which will be a program that effectively traces through the paths in a set of syntax diagrams.

A Simple Example
Let's try tracing these diagrams (figure 3) with a very simple string in mind: the number
25.15

We start in the expression diagram on the left edge. It directs us into the simple_expression diagram.

The simple_expression diagram asks whether we see a + or - sign or something else. We don't, so we take the short-cut path to the term box. Now we need to keep track of which boxes we are in. We'll do that by writing down the diagrams that we've entered, then indenting across when we enter another one, like this:
expression
.simple_expression
..term(1)

This means that we've entered expression, then simple_expression, then term, without exiting from any of them (yet). The number (1) refers to the first of the two term boxes found in the simple_expression diagram. The term diagram requires that we enter the factor diagram, so we have this trace now:
expression
.simple_expression
..term(1)
...factor(1)

The factor diagram says that we can accept the token number, which is what we have (25.15). This will normally be found by a scanner, rather than by going into more diagrams, as we've explained above. That's also why we've shown the number and identifier boxes with rounded ends rather than square ends: they are tokens, not syntax diagrams. Having scanned the number, we can exit the factor diagram. Our trace then looks like this:
expression
.simple_expression
..term(1)
...factor(1)
....[scan number: 25.15]
...[exit factor]


[Figure 3 contains four syntax diagrams: expression, which simply invokes simple_expression; simple_expression, an optional leading + or - followed by one or more term boxes separated by + or -; term, one or more factor boxes separated by * or /; and factor, which accepts an unsigned_number, an identifier, or a parenthesized expression.]
Figure 3. Syntax diagrams for an arithmetic expression


We are now back in the term diagram, just past the leftmost factor box. There isn't any next token, so we have to take the upper path that leads out of the term diagram:
expression
.simple_expression
..term(1)
...factor(1)
....[scan number: 25.15]
...[exit factor]
..[exit term]

We are back in the simple_expression diagram, just past the leftmost term box. Once again, there's no next token, so we have to take the upper path that leads out of the simple_expression diagram:
expression
.simple_expression
..term(1)
...factor(1)
....[scan number: 25.15]
...[exit factor]
..[exit term]
.[exit simple_expression]

We are about to exit the expression diagram, and there are no further exits after that. This is where we came in!
expression
.simple_expression
..term(1)
...factor(1)
....[scan number: 25.15]
...[exit factor]
..[exit term]
.[exit simple_expression]
[exit expression]

A More Complicated Example


Let's trace something more interesting, an expression with two operators and a pair of parentheses:
a+b*(c-d)

We'll talk through part of this, then leave the rest of the trace to the reader. As before, we start with an expression call, then enter the simple_expression diagram. There's no "+" or "-" sign in front (we see the token a instead), so we have to enter term. We then have to enter factor, and in that, we can accept an identifier token (the "a"), then exit. Here's the trace so far:
expression
.simple_expression
..term(1)
...factor(1)
....[scan identifier: a]
...[exit factor]

Notice that we've outdented the "exit" mark so that it lines up with the factor two lines above. At this point, we're just past the leftmost factor box in the term diagram. The next token is "+", and that doesn't match either of the "*" or "/" tokens permitted next in this diagram. So we have to take the exit path:
expression
.simple_expression
..term(1)
...factor(1)
....[scan identifier: a]


...[exit factor]
..[exit term]

We're now just past the leftmost term box in the simple_expression diagram. We can take the straight-through path to the token "+", because that's what we see next. And that leads to entering term a second time:
expression
.simple_expression
..term(1)
...factor(1)
....[scan identifier: a]
...[exit factor]
..[exit term]
..[scan "+"]
..term(2)

This requires entering factor(1), which can accept the next token (b), then exit:
expression
.simple_expression
..term(1)
...factor(1)
....[scan identifier: a]
...[exit factor]
..[exit term]
..[scan "+"]
..term(2)
...factor(1)
....[scan identifier: b]
...[exit factor]

At this point, the next token is "*", and the term diagram can accept that. After scanning it, we're back in the factor diagram, with a left parenthesis "(" as the next token:
expression
.simple_expression
..term(1)
...factor(1)
....[scan identifier: a]
...[exit factor]
..[exit term]
..[scan "+"]
..term(2)
...factor(1)
....[scan identifier: b]
...[exit factor]
...[scan "*"]
...factor(2)
....[scan "("]
....expression

After scanning "(", we enter the expression diagram again. This is like starting over at the top of the syntax diagram set, but we are doing it with some history that needs to be unfolded later. When we manage to work through the expression diagram this second time, we aren't done. We have to return to the exit point of the expression box in factor. Our indented sequence trail helps remind us of that history trail. But let's work through the expression. This will scan the string
c-d

before our trace falls back into the factor diagram:


expression
.simple_expression


..term(1)
...factor(1)
....[scan identifier: a]
...[exit factor]
..[exit term]
..[scan "+"]
..term(2)
...factor(1)
....[scan identifier: b]
...[exit factor]
...[scan "*"]
...factor(2)
....[scan "("]
....expression
.....simple_expression
......term(1)
.......factor(1)
........[scan identifier: c]
.......[exit factor]
......[exit term]
......[scan "-"]
......term(2)
.......factor(1)
........[scan identifier: d]
.......[exit factor]
......[exit term]
.....[exit simple_expression]
....[exit expression]

You should verify that the exits occur correctly where they do by noticing that the token following "d" is a right parenthesis ")", and this isn't accepted in any of the diagrams except factor. None of the movements will occur by "chance". They will always be dictated by making a move that's compatible with the next token and where we happen to be in the diagram. Well, we have just exited expression, and that takes us to the exit point of the expression box in the factor diagram. We expect to see a right parenthesis next (isn't that lucky?), so we scan it and exit. Since we're at the end of our expression, there's nothing to do but exit from the other diagrams. (But a token check should be made at each stage before exiting). Here's the remainder of the trace, picking up where we left off:
....[scan ")"] ...[exit factor] ..[exit term] .[exit simple_expression] [exit expression]

Why the Expression Syntax Diagrams Mimic Algebra


Although it isn't completely obvious, the expression syntax diagrams generate (or parse) sentences that are always correct algebraic expressions. During parsing, it will reject non-algebraic sentences. Here's how we can demonstrate this valuable property: If an arithmetic expression has a left parenthesis anywhere, then there must be a matching right parenthesis later. This follows from the factor diagram, which is the only one carrying parentheses. If this is entered by accepting some left parenthesis, we will have to work through an expression diagram (which might involve more parentheses), but eventually must come back out to match that left parenthesis with a right parenthesis.


There must be something between a left parenthesis and its matching right parenthesis. This follows from the observation that each of the diagrams must scan something, even if it's just a number or identifier. That something will of course be a legal arithmetic expression, assuming that's what expression generates. The simple_expression diagram accepts a clause that optionally starts with a "+" or "-" (the unary plus or minus) followed by a sequence of one or more term clauses, and each such pair must be separated by "+" or "-". This should be clear from the structure of the diagram's paths. But this is also a definition of a legal formula involving unary and binary addition/subtraction operators. Put another way, a simple_expression accepts the regular expression
( \+ | - )? term ( ( \+ | - ) term )*

The term diagram accepts a clause that is a sequence of one or more factor clauses, such that each pair of factor clauses must be separated by "*" or "/". This is a definition of a legal formula involving binary multiplication and division. Put another way, a term accepts the regular expression
factor ( ( \* | / ) factor )*

Although we haven't proven it yet, these diagrams are deterministic and unambiguous. There's only one way to accept any given (legal) algebraic expression. This is in spite of the fact that certain of the branching points seem to provide an arbitrary choice. However, you must follow the rule that if the next token is compatible with one of the immediate choices provided in the diagram, then you must follow that path, scanning the token. For example, if you are just past the factor box in the term diagram, and the next token is "*", then you must follow the path through the "*" token. You are not permitted to exit from the diagram. (That would lead to being unable to complete the parse). Illegal algebraic expressions are not accepted by these diagrams. The idea is the same as with an FSM. It's possible to block at various places in the diagrams. For example, if the next token when entering the factor diagram is neither an identifier, a number, nor a left parenthesis, then the acceptance is blocked. Since the set is deterministic, this means there's a syntax error at that point. The reader is invited to trace each of the following (illegal) algebraic strings to see that this is the case:
++a
a*-b
a*b c+d
b*)x+y(
b*(a-b

Attaching Semantics
Having an engine that can parse sentences may be interesting, but we usually want more than that. We want it to evaluate the sentence, or generate some assembly code in response to the sentence. We'll show just how to do that later, but for now, please note that if we can parse an input sentence by tracing through the diagrams, we can also do something on particular transitions. For example, consider the simple_expression diagram, and the leading + or - operator. It's clear that we want this to apply to the following term, whatever it is. So we need to scan the + or - operator, then wait until term is parsed. At that point, we have an operator and an operand, so some kind of action should be generated just past the first term box to perform that operation. Similarly, just past the second term box, we have two operands (the previous term and the current term), and a binary operator (+ or -). So that's the place to perform the binary operation.


To make all this work, we need to follow an evaluation consistency rule, which can be stated like this: Make each box carry some value. Assume that it has that value as a box, and make sure that each diagram supplies the value. What is the value of a diagram or box? That depends on what we are doing. If the purpose of parsing an algebraic sentence is to evaluate it numerically, then each diagram and box should carry something like a double-precision floating-point number. If the purpose of parsing is to generate assembly code, then we should think of the code, when ultimately assembled and executed, as yielding a value in a certain register or memory location. One can also make that idea totally abstract by calling functions that could do either one. If you accept this evaluation model (it's not the only conceivable one, by the way), then we can also show that the binary operators +, -, *, / follow the usual algebraic rules of precedence. That is, in the expression
a+b*c

the multiplication * will be evaluated before the addition +. Also, in the expression
a+b-c

the evaluation of the + and - is done left-to-right. In general, a sequence of + and - operators is evaluated from left to right. So is a sequence of * and / operators. However, * and / have higher precedence than + and -, so they get evaluated first. Also, any expression contained in parentheses (...) is evaluated before any operation on the parenthesized expression. To see why this is so, notice that any expression containing binary * and / operators is parsed by the term diagram. But this is embedded in the simple_expression diagram as a box. Therefore any non-parenthesized expression containing + - * / binary operators will be segmented into term portions (containing only * / operators) separated by binary + and - operators, as we've seen. But each of the term portions will be parsed and therefore fully evaluated before their associated + or - operators can be. An expression embedded in parentheses must be evaluated before any operator involving it, by the same reasoning. The parenthesized expression is parsed by the factor box, which falls under the term box, and must therefore be completed before any term operator can be done. We'll give some examples of this idea later when we actually write a postfix evaluator, a postfix generator and some simple compilers. You'll see that the generated operations fall out in the expected order of precedence. So the diagrams of figure 3 not only define a structure for algebraic expressions, but also supply the precedence rules for the operators.
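One concrete way to realize the consistency rule in C++ is to let each parsing function return the value its diagram carries. This is a hedged sketch, not the book's code: it assumes the lex object and Pars1.lex token numbers introduced later in this chapter, and it differs from the calculator built below only in passing values as return values instead of through an explicit stack:

double term(void);   // assumed to parse a term and return its value

double simple_expression(void) {
  bool negate= false;
  if (lex.tokenNumber() == 5)          // unary +: no effect on the value
    lex.tokenRead();
  else if (lex.tokenNumber() == 6) {   // unary -
    negate= true;
    lex.tokenRead();
  }
  double value= term();
  if (negate) value= -value;           // the leading - applies to the first term
  while (lex.tokenNumber() == 5 || lex.tokenNumber() == 6) {
    int op= lex.tokenNumber();         // save it: tokenRead changes it
    lex.tokenRead();
    double right= term();              // fully evaluated before combining
    if (op == 5) value= value + right; // left-to-right: each earlier sum
    else         value= value - right; // feeds the next operator
  }
  return value;
}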

Syntax Diagrams Can't be Folded into an FSM


We've seen that one diagram is very much like an FSM. It has states, and transitions between states on tokens. The only difference is that it can contain other diagrams (the rectangular boxes). Now suppose that we try substituting a copy of a named diagram into every box containing its name. We won't draw this monster diagram, because it gets very big very fast. But let's imagine that... Instead of having an expression call to enter a simple_expression diagram, we actually draw the simple_expression diagram into the expression diagram, in place of the simple_expression box. (This is rather like substituting a macro for a subroutine call). That gets rid of the simple_expression box, and in fact, we get the same thing by just changing the name of the simple_expression diagram to expression. (There's a reason for our not doing this in the first place; we want to introduce more operators later on).


We now have an expression diagram containing two term boxes. Let's replace each of those boxes with a copy of the term diagram. We now have an expression diagram containing four factor boxes. (Each term box contributes two of these). Let's replace each of them with a copy of the factor diagram. It appears that we've succeeded in creating a single large diagram for the whole set, except for one slight problem: we have an expression diagram that contains an expression box! If we try replacing the expression box by a copy of our diagram, we could do it, given a large enough sheet of paper, but it'll still contain yet another expression box.


Fig. 4. An infinite regression of expression diagrams


It's clear that we can't do this forever. Our diagram would look like figure 4, with an infinite regress of expression boxes inside expression boxes. Incidentally, it won't do to just move the arrow into the expression box back to the beginning of the expression diagram. The crucial point of the embedded expression box is that it has a certain position within the structure of the algebra, and that would be defeated by just moving back to the start node. Also, we'll see that some diagrams contain several references to themselves, and they need to be in an appropriate clause environment.

Recursive-Descent Parsing
In general, there's no way to design an FSM that's equivalent to a set of syntax diagrams, because of the recursive nature of some sets of diagrams. We must consider a different machine that can work through these diagrams. That machine is a recursive-descent parser. It's essentially a set of finite-state automata coupled with a push-down stack. The finite state automata are used to navigate a particular diagram. The push-down stack is essentially used to support the enter/exit protocol we need when one diagram requires another one. The push-down stack will come free of charge if we use function calls in a high-level language, in which recursive function calls are supported. Here's the implementation mechanism we propose:
- Prepare a lexgen description of all the tokens in the diagrams. A token is a synonym for a terminal. The diagram names are nonterminals, and should be ignored in making a token list. You can use directory lextest to prepare a lexical file and generate a scanner. We'll give a more detailed example of that next. lexgen provides the scanner, so that our diagram parser can concentrate on tokens, and not bother with whitespace.
- Declare a function for each diagram in a syntax diagram set. We might as well give the functions a name similar to that of the diagram's name. For example, we'll want a function for the expression diagram, which we might call parse_expression, or just expression. We'll want another function for the simple_expression diagram, one for the term diagram, and one for the factor diagram. These functions could be ordinary C functions or a member function of some class. We'll use them both ways.
- Each function is expected to parse the diagram it's associated with. It doesn't have to pay any attention to other diagrams. Each function needs to do the following, in general:
  - The function starts at the left edge of the diagram, and is expected to work through all possible paths through its source code. When it reaches an exit edge, it returns.
  - The function is expected to check the "next" token (supplied by the scanner) against a token coming up next in the diagram. If a match can be found, then the scanner token is scanned, and we branch to a point just past that token in the diagram. We say that the diagram's token is accepted in this process.
  - If the diagram calls for entering another diagram, i.e. has a rectangular box reference to another diagram, then we call that diagram's function. The call will of course push a return address on the compile-time stack, which is effectively a marker indicating where to return.
  - If the next token doesn't match any of the choices provided at a decision point, then it should declare a syntax error.

Preparing a Scanner with Lexgen


We'll now implement the figure 3 syntax diagrams following this plan. Our diagrams contain these tokens:
+ - * / ( ) Identifier Real Integer

; "Real" and "Integer" are the "unsigned number" token

Let's assume these are Pascal-style tokens, and that we wish Pascal-style comments and whitespace. The appropriate lex file will then be as follows:
# Pars1.lex
# LEXICAL FRAMEWORK for a sample expression parser
casesens on
WhiteSpace getWhiteSpace [ \t\n]|\(\*[\t\n -~]*\*\)|\{[\t\n -\|~]*\}
EOF getEOF [\04]
Identifier getPIdent [A-Z][_A-Z0-9]*
Integer getInteger [0-9]+
Real getFloat [0-9]+(([.][0-9]+)|([E][\+\-]?[0-9]+)|\
([.][0-9]+[E][\+\-]?[0-9]+))
# Literal tokens
+ +
+ -
+ *
+ /
+ (
+ )

Call this file Pars1.lex. You will also need your copy of directory lib. Build the lib directory files by executing make in that directory. Then build a trial scanner by executing
make LEX=Pars1

in the lextest directory. (This works for Unix. For a Microsoft compiler environment, use nmake instead of make). If this goes well, you should be able to execute file lextest.exe and check that it accepts each of the tokens in these diagrams. Now look at file Pars1lex.cpp, which has been generated by lexgen. It should have two functions in it, Clexf::initNames, and Clexf::getToken. The initNames function is called once when a Clexf object is created. It contains a list of your tokens, and their assigned token codes. Here's how that list came out for file Pars1.lex:
tokenNames[0]= "WhiteSpace";
tokenNames[10]= ")";
tokenNames[9]= "(";
tokenNames[8]= "/";
tokenNames[7]= "*";
tokenNames[6]= "-";
tokenNames[5]= "+";
tokenNames[4]= "Real";
tokenNames[3]= "Integer";
tokenNames[2]= "Identifier";
tokenNames[1]= "EOF";

This says (for example) that token 9 is the left parenthesis token. Token 2 is the Identifier token. The whitespace token will never be seen, and we can also ignore the EOF token. We'll use the other token numbers in the parser that we're about to write. The main function of the scanner tester program is in file lextest\lextest.cpp. (This is not a generated file, so you can safely modify it). Here's what that looks like, approximately: int main(int argc, const char **argv) { progname= argv[0]; if (argc != 2) giveHelp("expecting a file name"); Clexf lex; const char* fname= argv[1]; if (lex.open(fname)) { while (!lex.atEOF()) { Ctoken& token= lex.nextToken(); token.dump(); cout << endl; lex.tokenRead(); } lex.close();

Chapter 8: Parsing with Syntax Diagrams, page 170

} else giveHelp("unable to open file"); return 0; } The scanner is object lex, whose class is Clexf. (This class definition can be found in lib\lexf.h and lib\lexf.cpp). That object is opened with a file name, which prepares it for scanning operations. The first token is already in place. You can obtain a reference to the token through the member function lex.nextToken(). However, you'll might want to see the token number instead, and that's returned by the member function lex.tokenNumber(). The number that's returned will be the same one that appears in the initNames list. You can call lex.tokenNumber or lex.nextToken as many time as you like. It does not advance through the next token. When you call lex.tokenRead(), the scanner does advance through that token. That's the pattern used in the main function given above. lex.nextToken is called to fetch a pointer to the token, the token is printed through token,dump(), then that token is scanned over through lex.tokenRead, releasing a new next token. We test for lex.atEOF which becomes true just after the last token is scanned (end-of-file), after which object lex can be closed. Instead of just reading tokens, we're going to call a parsing function expression. This is the "goal" syntax diagram in figure 3, and we expect to parse through any algebraic sentence that it's given. So the main function should be changed to this:
Clexf lex;

// the expression function goes here

int main(int argc, const char **argv) {
  progname= argv[0];
  if (argc != 2) giveHelp("expecting a file name");
  const char *fname= argv[1];
  if (lex.open(fname)) {
    expression();
    if (!lex.atEOF())
      cerr << "**unwanted material following the expression" << endl;
    lex.close();
  }
  else giveHelp("unable to open file");
  return 0;
}

The idea here is that function expression is supposed to scan through a complete algebraic expression, then return. It's possible that after scanning something that matches an expression pattern, there's more stuff left over, like this:
(a+b)*c)

To catch that error, we need to test for an EOF following the expression, which is why we placed the lex.atEOF() call after the expression() call. Notice that we've moved the Clexf declaration out of main and into the global scope. It should precede any functions you introduce so that they can reference lex. We're now ready to design the parsing functions expression, simple_expression, term and factor


shown in figure 3.

Writing Parsing Functions


The simple_expression diagram can be implemented with this code:
void simple_expression(void) {
  if (lex.tokenNumber() == 5 ||   // +
      lex.tokenNumber() == 6)     // -
    lex.tokenRead();
  term();
  while (lex.tokenNumber() == 5 || lex.tokenNumber() == 6) {
    lex.tokenRead();
    term();
  }
}

This rests on our observations that the decision required just after the leftmost term box is the same as the one required just after the rightmost term box. It also says that a simple expression is a sequence of term sentences separated by + or - operators, with an optional + or - in front. The reader should now be able to write a structured function for parsing the term diagram that's similar to the simple_expression function; a sketch is given below.
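Here is one possible version of that term function, shown as a hedged sketch; it assumes the Pars1.lex token numbers listed earlier (7 for "*", 8 for "/"):

void term(void) {
  factor();
  while (lex.tokenNumber() == 7 ||   // *
         lex.tokenNumber() == 8) {   // /
    lex.tokenRead();
    factor();
  }
}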
The factor diagram is clearly expressed by the following structured function:

void factor(void) {
  if (lex.tokenNumber() == 3 ||   // Integer
      lex.tokenNumber() == 4 ||   // Real
      lex.tokenNumber() == 2) {   // Identifier
    lex.tokenRead();
  }
  else if (lex.tokenNumber() == 9) {   // (
    lex.tokenRead();
    expression();
    if (lex.tokenNumber() == 10)       // )
      lex.tokenRead();
    else {
      cerr << "Syntax error, expecting )" << endl;
      exit(-1);
    }
  }
  else {
    cerr << "Syntax error, expecting ( Real Integer Identifier" << endl;
    exit(-1);
  }
}

Here we have two syntax error possibilities. Upon entering the function, if the next token isn't a number, identifier, or a left parenthesis, we must have a syntax error. On a syntax error, we just complain through a message to cerr, then exit the program. Later, we'll see how we can make an intelligent recovery from syntax errors.


The other syntax error can occur after returning from the expression function call. We should then see a right parenthesis. If not, then a syntax error has occurred.

An Infix to Postfix Translator


Infix is what most of us are used to. It's the language of algebra. Infix means that the operator is between two operands, for example,
a + b

The syntax diagram in figure 3 describes infix expressions. Postfix means that the operator is written after its two operands, for example,
a b +

Postfix is used in some HP calculators. The idea is that you enter the first number (a), press the push key, enter the second number (b), press the add key. The sum should appear in the HP calculator window. To make sense of this, you need to think of the display window as the top number of a stack of numbers. When you press push, a copy of the displayed number is pushed into the calculator's internal stack. (It also remains in the display). When an operator key (such as add) is pressed, the number in the display and the one at the top of the internal stack are added. The internal stack is popped and the result appears in the display. So the rules for evaluating postfix are:
- on an operand, push the operand value in a stack
- on an operator, pop the operand or operands (may be 1 or more, depending on the operator) from the stack, perform the evaluations, then push the result in the stack.
There are similar rules for generating postfix from infix, but they are complicated by the need to identify the precedence of the operators. (That's where our translator comes in):
- on an operand, write out the operand
- parentheses are not considered as an operator
- operators are written out following their operands, but only in precedence order, with the higher-precedence operators appearing before the lower-precedence operators.
Here's an example of infix to postfix:
infix:               ((-17 + 49)/4 - 2*3)*(9-3+2)
equivalent postfix:  17 neg 49 + 4 / 2 3 * - 9 3 - 2 + *

Notice that the 17, 49, 4, 2, 3 appear in the postfix in the same order as they appear in the infix. However, the operators appear in precedence order. The unary minus applies to the 17, and so must appear just after it. The * between ) and ( comes at the very end, since each of its operands must be evaluated first.
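The two evaluation rules above are easy to mechanize. Here is a hedged, self-contained C++ sketch of a postfix evaluator for whitespace-separated token strings; the function name evalPostfix and the use of atof are illustrative choices, not part of the book's tool set:

#include <cstdlib>
#include <iostream>
#include <sstream>
#include <stack>
#include <string>
using namespace std;

// Evaluate a whitespace-separated postfix string by the two rules above:
// on an operand, push it; on an operator, pop, apply, and push the result.
double evalPostfix(const string& text) {
  istringstream in(text);
  stack<double> s;
  string tok;
  while (in >> tok) {
    if (tok == "neg") {                       // unary: one operand
      double a= s.top(); s.pop();
      s.push(-a);
    }
    else if (tok == "+" || tok == "-" || tok == "*" || tok == "/") {
      double b= s.top(); s.pop();             // second operand pops first
      double a= s.top(); s.pop();
      if      (tok == "+") s.push(a + b);
      else if (tok == "-") s.push(a - b);
      else if (tok == "*") s.push(a * b);
      else                 s.push(a / b);
    }
    else s.push(atof(tok.c_str()));           // an operand: push its value
  }
  return s.top();
}

int main() {
  // the example worked out above; should print 16
  cout << evalPostfix("17 neg 49 + 4 / 2 3 * - 9 3 - 2 + *") << endl;
  return 0;
}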

An Infix to Postfix Translator


Let's now design an infix to postfix translator, using the parsing functions developed for the algebraic expressions given in figure 3. From the traces we've made of these diagrams, and their structure, it should be clear that:


- We can use the lexgen functions developed in chapter 3 to open the source file and find the tokens in it.
- An infix expression is scanned from left to right. The factor diagram is invoked for each operand (number or identifier) in the order in which it appears in the infix. This means that we can just emit any identifier or number string when it appears in the factor diagram.
- In the term diagram, we've seen that the leftmost factor is parsed, then one of the operators "*" or "/" is scanned, then the second factor is parsed. This corresponds to the notion of binary infix, i.e. the first operand followed by the operator followed by the second operand. This pattern can be repeated for several infix operands in series through the feedback loop. We therefore want to emit an operator symbol just after the second factor operand is parsed.
- The simple_expression diagram is very similar to the term diagram, except for the initial unary "+" or "-". Thus we should emit an operator symbol just after the second term operand is parsed. For the unary "+" or "-", if we see a "-", we'll emit a neg operator just after the leftmost term is parsed. We can just ignore a unary "+", since it doesn't change anything.

Let's put all this into structured C++ code. We'll call it infix.cpp, by modifying lextest.cpp in directory lextest. Here are the changes needed:
- We've moved the Clexf lex; declaration from a local position to global so that our functions can access it.
- We've replaced the scanner dumping code in main with a call to expression(), as explained above.
- We've added a test for EOF just after the expression() call to make sure that no garbage appears after a proper expression.
- We've added our postfix generator code (expression, simple_expression, term, factor).
These and the main function are given below.
// prototypes for the functions
void simple_expression(void);
void term(void);
void factor(void);

void expression(void) {
  simple_expression();
}

void simple_expression(void) {
  int op= 0;
  if (lex.tokenNumber() == 5)          // +
    lex.tokenRead();
  else if (lex.tokenNumber() == 6) {   // -
    op= 6;
    lex.tokenRead();
  }
  term();

We need to capture and save the token number here because it'll change on the next tokenRead call


Here's where we emit a unary negation of the previous term, if the operator were unary "-"
  if (op == 6)
    cout << " neg";
  while (lex.tokenNumber() == 5 || lex.tokenNumber() == 6) {

We need to capture and save the token number here because it'll change on the next tokenRead call
    op= lex.tokenNumber();
    lex.tokenRead();
    term();

Here's where the binary postfix operator is emitted for the preceding two term function calls
    if (op == 5)
      cout << " +";
    else
      cout << " -";
  }
}

void term(void) {
  int op= 0;
  factor();
  while (lex.tokenNumber() == 7 ||   // *
         lex.tokenNumber() == 8) {   // divide, /
We need to capture and save the token number here because it'll change on the next tokenRead call
    op= lex.tokenNumber();
    lex.tokenRead();
    factor();
Here's where the postfix operator is emitted, based on our saved op value
    if (op == 7)
      cout << " *";
    else
      cout << " /";
  }
}

void factor(void) {
  if (lex.tokenNumber() == 4 ||   // Real
      lex.tokenNumber() == 3 ||   // Integer
      lex.tokenNumber() == 2) {   // Identifier

The member function getStringValue delivers the string equivalent of any of the above tokens.
    cout << ' ' << lex.nextToken().getStringValue();
    lex.tokenRead();
  }

We don't have to add any postfix generation code for this part.
  else if (lex.tokenNumber() == 9) {   // (
    lex.tokenRead();
    expression();
    if (lex.tokenNumber() == 10)       // )
      lex.tokenRead();
    else {
      cerr << "Syntax error, expecting )" << endl;
      exit(-1);
    }
  }
  else {
    cerr << "Syntax error, expecting number, identifier or (" << endl;
    exit(-1);
  }
}

int main(int argc, const char **argv) {
  progname= argv[0];
  if (argc != 2) giveHelp("expecting a file name");
  const char *fname= argv[1];
  if (lex.open(fname)) {
    expression();

After an expression is parsed, we should see an end-of-file, otherwise there's surplus stuff following the expression.
    if (!lex.atEOF()) {
      cerr << "Syntax error, input not fully parsed" << endl;
    }
    lex.close();
  }
  else giveHelp("unable to open file");
  return 0;
}

This can be compiled and built if you have a C++ compiler, and the lexfile utility is installed. Here's a run of the resulting executable file on the infix expression given above:
infix infix1.in

This is the input line


; ((-17 + 49)/4 - 2*3)*(9-3+2)

and this is the generated output, with no syntax error complaints


17 neg 49 + 4 / 2 3 * - 9 3 - 2 + *

A Calculator
Instead of printing postfix, we can easily modify our parser to directly perform the calculations. This will be a simple interpreter. It's designed to accept an arbitrary arithmetic expression (all numbers), then deliver the resulting numeric value. The source code will be very similar to the one we've written above that produces postfix. However, we need to make a few changes, as follows:
- We need to introduce a pushdown stack for the numbers to be used during evaluation, which is now integrated with parsing.
- Instead of printing each number, we'll push the number's value (in floating-point form) into the stack.
- Instead of printing each operator, we'll evaluate the operator by popping the stack (usually twice), doing the evaluation on the popped members, then pushing the result.
- To evaluate and report an expression's value, we'll call expression(). When it returns, we'll pop the stack and print the value of the result.
- We won't try to support identifiers, since we haven't developed a declaration mechanism, a symbol table or any way to set a variable value.
We'll call this calc.cpp. As before, it's a modification of the base lextest.cpp file in directory lextest. Here is a partial listing of this file to illustrate the changes made:
// calc.cpp

A simple stack is implemented with this array, push and pop functions.
#define SSIZE 50
double stack[SSIZE];
int tos= -1;   // index of stack top

void push(double value) {
  stack[++tos]= value;
}

double pop(void) {
  return stack[tos--];
}

The remainder of the file is almost the same as the infix file given above.
// prototypes for the functions
void simple_expression(void);
void term(void);
void factor(void);

void expression(void) {
  simple_expression();
}

void simple_expression(void) {
  int op= 0;
  if (lex.tokenNumber() == 5)          // +
    lex.tokenRead();
  else if (lex.tokenNumber() == 6) {   // -
    op= 6;
    lex.tokenRead();
  }
  term();

The number developed in term will be on the stack top. We pop it, negate it, then push it back in.
  if (op == 6)   // negate
    push(-pop());
  while (lex.tokenNumber() == 5 || lex.tokenNumber() == 6) {
    op= lex.tokenNumber();
    lex.tokenRead();
    term();
    double f1, f2;
Here's where we pop the two operands from the stack, form the sum or difference, then push the result. Notice they need to be popped in the opposite order of pushing. The second operand comes out of the stack first.
    f2= pop();
    f1= pop();
    if (op == 5)
      push(f1+f2);

Chapter 8: Parsing with Syntax Diagrams, page 177

else push(f1-f2); } }

The term function is very similar to simple_expression above.
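The book's listing omits it, but following that pattern, term might look like this sketch (token numbers 7 and 8 for * and /, matching the compiler version later in this chapter; this is an illustration, not the omitted listing):

void term(void) {
  factor();
  while (lex.tokenNumber() == 7 ||   // *
         lex.tokenNumber() == 8) {   // /
    int op= lex.tokenNumber();
    lex.tokenRead();
    factor();
    double f2= pop();   // the right operand was pushed last
    double f1= pop();
    if (op == 7)
      push(f1*f2);
    else
      push(f1/f2);
  }
}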


void factor(void) {
  if (lex.tokenNumber() == 4) {       // Real

The getDouble member function returns the double-precision float value associated with this token. We just push it in the stack.

    push(lex.nextToken().getDouble());
    lex.tokenRead();
  }
  else if (lex.tokenNumber() == 3) {  // Integer

The getInteger member function returns a long integer, which we push in the stack.

    push((double)lex.nextToken().getInteger());
    lex.tokenRead();
  }

The remainder of this function is unchanged.


  else if (lex.tokenNumber() == 9) {   // (
    lex.tokenRead();
    expression();
    if (lex.tokenNumber() == 10)       // )
      lex.tokenRead();
    else {
      cerr << "Syntax error, expecting )" << endl;
      exit(-1);
    }
  }
  else {
    cerr << "Syntax error, expecting number or (" << endl;
    exit(-1);
  }
}

int main(int argc, const char **argv) {
  progname= argv[0];
  if (argc != 2) giveHelp("expecting a file name");
  const char *fname= argv[1];
  if (lex.open(fname)) {
    expression();
    if (!lex.atEOF()) {
      cerr << "Syntax error, input not fully parsed" << endl;
    }

The final value should be in the stack, so we pop it and print the result:
    cout << "value= " << pop() << endl;
    lex.close();
  }
  else giveHelp("unable to open file");
  return 0;
}


This program should report 16 as the value of the infix expression above, like this:
D:\research\vs15\lextest>calc infix1.in
; ((-17 + 49)/4 - 2*3)*(9-3+2)
value= 16

If it doesn't, double-check the program.

A Single-Register Compiler for the 80x86 CPU


The 80x86 CPU supports 32-bit integer arithmetic. Let's see how to build a simple compiler for it, assuming that all variables and constants are 32-bit signed integers. You need to be reasonably familiar with the 80x86 architecture to follow this; see appendix 2 for a summary of the 80x86 instruction set.

We'll assume that all operations will be carried out in such a way as to leave their result in register EAX. This will be a so-called single-register arithmetic evaluator. With a more elaborate strategy, we could take advantage of all six registers available for arithmetic in this architecture (EAX, EBX, ECX, EDX, ESI, EDI).

Now look at figure 3. We'll make sure that each of the functions expression, simple_expression, term and factor is designed so that it leaves behind the value of these forms (at runtime) in register EAX. That is, each will generate target assembler code to do exactly that. This is an invariant for these four parsing functions.

In the factor diagram, leaving the value in EAX is easy. If we find an unsigned_number, then that can be loaded into EAX through one instruction, like this:
mov EAX,constant

where constant is the number. If we see an identifier, it should represent a memory address containing a 32-bit integer. So getting it in EAX is also easy:
mov EAX,id

where id is the identifier name. The assembler will convert this into a memory-reference instruction loading the value associated with the name. (We will assume that each identifier is suitably declared earlier in Masm format.)

In the simple_expression diagram, figure 3, we have several different operations to deal with. If there's a unary minus (the first minus sign), we need to capture that fact. Then after calling term, we will have some integer value in EAX. It should be negated if a unary minus was seen, and that's done with the instruction
neg EAX

If one of the binary operators + or - is seen, we also need to capture that operator, then call term a second time. However, the second term call will overwrite the EAX value from the first term call. We need to save the first one, and we'll do that by pushing it in the runtime stack. So we need to issue the instruction
push EAX

when we see the + or - binary operator (the second one in the simple_expression diagram), but before we make the second term call.


We're now ready for the second term call; it will leave the second operand in EAX as usual. Now, just past the second term call, we need to emit instructions for the addition or subtraction. The situation is that the first operand is in the runtime CPU stack, and the second operand is in EAX. We need to leave the result in EAX. For an addition, that's easy, since addition is commutative. The two instructions
	pop	ECX
	add	EAX,ECX

do exactly what we want. For a subtraction, the operands are in the wrong order. If we emit these instructions:
	pop	ECX
	sub	EAX,ECX

then we end up with the wrong sign in EAX. (We've subtracted the first operand from the second one). But that can be fixed by emitting a negation instruction, like this:
neg EAX

Multiplication and division are similar to addition and subtraction, except that reversing the division is not so easy. That requires a slightly different approach, like this, for division:
	mov	ECX,EAX
	pop	EAX
	cdq
	idiv	ECX

We want the second operand (in EAX) to be the divisor, so we copy it to ECX. We can then bring back the first operand (in the stack) into EAX, expand it into a double-word integer, and perform the division using ECX as the divisor.

As with addition/subtraction, we need to push EAX before the second factor call is made, since the left operand of the * or / is in EAX, and the second factor call will overwrite EAX. Here's the critical code. (There's more in lextest/lextest.cpp, which you will need in order to compile this.)
Clexf lex;

int labelValue= 0;

int getNextLabel(void) {
  return labelValue++;
}

// prototypes for the functions
void simple_expression(void);
void term(void);
void factor(void);

void expression(void) {
  simple_expression();
}

void simple_expression(void) {
  int op= 0;
  if (lex.tokenNumber() == 5)        // +
    lex.tokenRead();
  else if (lex.tokenNumber() == 6) { // -
    op= 6;
    lex.tokenRead();
  }
  term();

Here's where the first instructions are generated. The "\tneg\tEAX" prints as TAB neg TAB EAX.

  if (op == 6)
    cout << "\tneg\tEAX" << endl;
  while (lex.tokenNumber() == 5 ||
         lex.tokenNumber() == 6) {
    op= lex.tokenNumber();
    lex.tokenRead();
    cout << "\tpush\tEAX" << endl;
    term();
    cout << "\tpop\tECX" << endl;

The following prints as TAB add TAB EAX,ECX, i.e. add ECX to EAX, leaving the result in EAX.

    if (op == 5)
      cout << "\tadd\tEAX,ECX" << endl;
    else {
      cout << "\tsub\tEAX,ECX" << endl;
      cout << "\tneg\tEAX" << endl;
    }
  }
}

void term(void) {
  int op= 0;
  factor();
  while (lex.tokenNumber() == 7 ||   // *
         lex.tokenNumber() == 8) {   // divide, /
    op= lex.tokenNumber();
    lex.tokenRead();
    cout << "\tpush\tEAX" << endl;
    factor();
    if (op == 7) {
      cout << "\tpop\tECX" << endl;
      cout << "\timul\tECX" << endl;
    }
    else {
      cout << "\tmov\tECX,EAX" << endl;  // divisor to ECX
      cout << "\tpop\tEAX" << endl;      // return the dividend
      cout << "\tcdq" << endl;           // expand it into EDX:EAX
      cout << "\tidiv\tECX" << endl;
    }
  }
}

void factor(void) {
  if (lex.tokenNumber() == 3) {        // Integer constant
    cout << "\tmov\tEAX," << lex.nextToken().getStringValue() << endl;
    lex.tokenRead();
  }
  else if (lex.tokenNumber() == 2) {   // Identifier
    cout << "\tmov\tEAX," << lex.nextToken().getStringValue() << endl;
    lex.tokenRead();
  }
  else if (lex.tokenNumber() == 9) {   // (
    lex.tokenRead();
    expression();
    if (lex.tokenNumber() == 10)       // )
      lex.tokenRead();
    else {
      cerr << "Syntax error, expecting )" << endl;
      exit(-1);
    }
  }
  else {
    cerr << "Syntax error, expecting number, identifier or (" << endl;
    exit(-1);
  }
}

int main(int argc, const char **argv) {
  progname= argv[0];
  if (argc != 2) giveHelp("expecting a file name");
  const char *fname= argv[1];
  if (lex.open(fname)) {
    expression();
    if (!lex.atEOF()) {
      cerr << "Syntax error, input not fully parsed" << endl;
    }
    cout << "; result is in EAX" << endl;
    lex.close();
  }
  else giveHelp("unable to open file");
  return 0;
}

And here's this little compiler's output for a single expression containing all the operators with names and constants. The result is left in EAX after all calculations are done:
; ((-Able + 49)/Bill - 22*3)*(Sam-7+Fred)
	mov	EAX,ABLE
	neg	EAX
	push	EAX
	mov	EAX,49
	pop	ECX
	add	EAX,ECX
	push	EAX
	mov	EAX,BILL
	mov	ECX,EAX
	pop	EAX
	cdq
	idiv	ECX
	push	EAX
	mov	EAX,22
	push	EAX
	mov	EAX,3
	pop	ECX
	imul	ECX
	pop	ECX
	sub	EAX,ECX
	neg	EAX
	push	EAX
	mov	EAX,SAM
	push	EAX
	mov	EAX,7
	pop	ECX
	sub	EAX,ECX
	neg	EAX
	push	EAX
	mov	EAX,FRED
	pop	ECX
	add	EAX,ECX
	pop	ECX
	imul	ECX
; result is in EAX

Testing this program (or any of the others in this chapter) requires that you have a Masm-compatible assembler, a linker, and an Intel 80x86 (PC) platform. If you want to assemble this, you need to add some header and trailer material to this assembly fragment. Refer to assembler materials for specific instructions on your platform. If you have a Linux platform running on an 80x86, you need to change the generated instructions to conform to the AT&T syntax style, then assemble it under the Gnu AS assembler. See appendix 2 for more details and complete assembler program examples.

This program doesn't seem to do anything when executed, because there's no output! The code fragment leaves a result in register EAX, but that disappears unless it's printed. It happens that (for most compilers) a C function call returns an integer result in EAX, so an easy thing to do is to embed the above code fragment in a procedure header and trailer, call it from a C program, and print the return value. (The C function that calls this should return type int, which corresponds to a 32-bit integer in EAX.)
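For example, here is a sketch of that arrangement. The file names, data values and the procedure name evalExpr are hypothetical, and the exact directives depend on your assembler and platform:

; expr.asm -- wraps the compiler's output in a callable procedure
	.386
	.model	flat, c
	.data
ABLE	dd	10	; every identifier used must be declared somewhere
BILL	dd	2
SAM	dd	5
FRED	dd	7
	.code
evalExpr proc
	; ... paste the generated instruction sequence here ...
	ret		; the result is already in EAX, the C return register
evalExpr endp
	end

The C++ driver then just prints the returned value:

// main.cpp -- calls the assembled fragment and prints its value
#include <iostream>
using namespace std;

extern "C" int evalExpr(void);   // a 32-bit int comes back in EAX

int main(void) {
  cout << "value= " << evalExpr() << endl;
  return 0;
}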

Syntax Diagram Validity


Recursive descent is essentially an LL(1) parsing process, dressed up in the form of function calls. Consider: each diagram is associated with a function, and its counterpart is one or more production rules. The purpose served by calling a function associated with a diagram is to cover some sequence of input tokens in a top-down fashion. We therefore expect that LL(1) theory can be used to validate a set of syntax diagrams. It can also be used to settle various questions that may arise while programming a set of syntax diagrams, e.g.:
- At a fork in the diagram, which path should the function take?
- Is a given set of diagrams valid, or is there some ambiguity lurking in it?
- Where should syntax errors be reported?
- What can be said about the expected next tokens on an error?
- How might one recover from a syntax error?

Diagram Validity
These basic properties should be satisfied by a set of syntax diagrams:
1. Exactly one start node in each diagram.
2. At least one finish node in each diagram.
3. For every nonterminal in any diagram with the name N, there must be a diagram labelled N.
4. For every nonterminal or token element E in any diagram, there must be at least one directed path: (a) from the start node to E, and (b) from E to a finish node.
5. Every diagram must be capable of deriving at least one terminal string.
6. The first sets just beyond each fork junction (a junction that subdivides into more than one path) must be pairwise disjoint.

Properties 1-4 are easily established by inspection, unless the diagrams are constructed through some automated system, for example, syngraph, described in appendix 7. Properties 5 and 6 require some analysis.

Derivation of Terminal Strings, Algorithm C5


This can be established through an algorithm that decorates each node in each graph, and also each diagram, with a boolean flag. Algorithm C5 thus requires two tags: a tag t attached to each node in the set of diagrams, and a tag d attached to each diagram.
Precondition: diagrams satisfy conditions 1-4.
Set all d and t tags to false;
Repeat {
  For (each diagram D) do {
    Set D.start.t = true;
    For (each node N in D such that N.t == true) do {
      For (each node P in D such that there is an edge from N to P) do {
        If (node P is a token node)
          Set P.t = true;
        Else if (node P is associated with a diagram g such that g.d == true)
          Set P.t = true;
        If (P.t && node P is the finish node of diagram D)
          Set D.d = true;
      }
    }
  }
} until (no d tag has changed value && no t tag has changed value);
If (each diagram D is such that D.d == true)
  then (each diagram can generate a terminal string);

The general thrust of algorithm C5 is to push through paths in each of the diagrams, starting with the start node. We permit a path to go through a terminal (token) node, but not through a nonterminal (diagram) node unless we have previously shown that that diagram can derive a terminal string. The d tag attached to a diagram certifies that that diagram can derive a terminal string. The node tags t are used to keep us from moving around circular paths indefinitely. They are also used to conclude that no further progress can be made anywhere in a particular graph.

For example, suppose that we have a set of three diagrams A, B and C. When we search paths in A, we may find that every path from start to finish requires passing through a B or a C node. So we need to try to find a terminal path through B and/or C. If these are also blocked by A, B or C nodes, then we have to conclude that A cannot derive a terminal string. (Neither can B or C.) However, if we find a path through C (say) from start to finish that doesn't require either of the other two, then we can say that C derives a terminal string. This fact can now be used to investigate the A paths. Perhaps A was previously blocked by C; it is now unblocked, so that perhaps a path exists from start through terminal nodes, through one or more C nodes, and finally to finish. Of course, we also need to determine that B can derive a terminal string.

In any case, algorithm C5 will always finish in a finite number of steps, since we will eventually complete a pass through the diagrams such that no tag of either kind has changed value. When D.finish.t is set true for some diagram D, we know that a path exists from start to finish in that diagram that passes through either token nodes or nonterminal nodes that can derive terminal strings. We clearly want this property for every diagram in the set.
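Here is a minimal C++ rendering of algorithm C5, under an assumed graph representation; the Node and Diagram types are illustrative, not the book's code (isFinish marks a node whose out-edge reaches the diagram's finish junction):

#include <vector>
using namespace std;

struct Diagram;

struct Node {
  enum Kind { START, TOKEN, NONTERM } kind;
  Diagram *refers;         // for NONTERM nodes: the diagram the box names
  bool isFinish;           // out-edge reaches this diagram's finish junction
  vector<Node*> next;      // successor nodes along directed edges
  bool t;                  // tag t of algorithm C5
};

struct Diagram {
  Node *start;             // the unique start node (kind == START)
  vector<Node*> nodes;     // every node of the diagram, start included
  bool d;                  // tag d: this diagram derives a terminal string
};

// Property 5: every diagram must derive at least one terminal string.
bool derivesTerminalStrings(vector<Diagram*> &diagrams) {
  for (Diagram *D : diagrams) {        // set all tags false
    D->d = false;
    for (Node *N : D->nodes) N->t = false;
  }
  bool changed;
  do {                                 // iterate to a fixpoint
    changed = false;
    for (Diagram *D : diagrams) {
      if (!D->start->t) { D->start->t = true; changed = true; }
      for (Node *N : D->nodes) {
        if (!N->t) continue;           // only propagate from tagged nodes
        for (Node *P : N->next) {
          bool passable =
            P->kind == Node::TOKEN ||
            (P->kind == Node::NONTERM && P->refers->d);
          if (passable && !P->t) { P->t = true; changed = true; }
        }
        if (N->isFinish && !D->d) { D->d = true; changed = true; }
      }
    }
  } while (changed);
  for (Diagram *D : diagrams)
    if (!D->d) return false;
  return true;
}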

Example
Consider the graphs in figure 5 (below). Since we require a single start node, we'll assume that the two start nodes in graph X are the same, i.e. the graph opens with a choice between the upper and lower path. Graph X will clearly pass condition 4a and 4b. All nodes can clearly reach finish and be reached from start.

[Figure 5. Decorated syntax graphs: graph X (with nodes a, b, c, d) and graph Y, in which nodes h and j cannot reach finish.]

[Figure 6. The same syntax diagrams X and Y decorated with first sets such as {+, *, -} and {+, -}.]

Graph Y will also pass condition 4a. All the nodes can be reached from start. However, graph Y fails 4b. Although nodes e, f and g can reach finish, nodes h and j cannot.

First Sets on Syntax Diagrams


We still aren't completely satisfied with our syntax diagrams. We can still have some problems with their design. Look at the diagrams in figure 6. There's still a problem with using these in a parsing program. It appears at the start node in the X diagram.

While parsing, we need to decide on one of the three paths presented to us. In the upper path, we expect to see "+" under the read head. In the middle path, we expect to see "*". But what about the lower path? This says enter diagram Y. Under what conditions should we do that? Well, we need to examine diagram Y. This, too, presents a choice of two tokens, "+" or "-". But the "+" token also appears as one of the X diagram choices. So we have a parsing conflict, i.e. more than one way to proceed with the parsing.

This could be resolved if we could look ahead more than one token. If we chose "+" from the X diagram, that's followed by a Y, and Y as we've seen expects "+" or "-". This means that the diagrams are not ambiguous, but that they merely have a parsing difficulty. If the diagrams were ambiguous, there would be more than one path through them that could derive the same sentence. If we chose "+" from the Y diagram instead, that will take us back into X and a return from X, which will presumably expect an end-of-file token. So we could in fact make this decision, given more than one token to peek at.

We call "peeking at the next token" a one-token lookahead. If we have to peek at n tokens ahead of the current position, we have an n-token lookahead. For an n-token lookahead, we say that the lookahead set is all the possible strings of length n or less that are possible at this point in a parse. Lookaheads for n much greater than 1 become unmanageable, because the number of possible lookahead sets tends to increase exponentially with n for typical syntax graphs. It turns out that Pascal and similar languages can be described with syntax graphs that require only a 1-token lookahead. So we'll not consider n greater than 1.

First Sets
We need a way of discussing this situation. And that calls for another definition. A first set is a set of tokens defined on a particular edge. It is the set consisting of the initial token of all the possible paths starting at the edge.

In terms of our string derivation concepts, consider all the sentences that can be derived starting at some edge point in one of the diagrams. In general, each of these will be the tail of some sentences derivable from the start diagram. Then the first set at that edge point is the set of all the tokens appearing first in those sentences. Notice that the set of all the sentences is infinite in general. However, a first set is finite, because it is a subset of a finite set: the set of terminals of the syntax diagram grammar.

To see how this works, look at figure 6 again. What is the first set on the start edge in the X diagram? Well, clearly, "+" is in this set, since by following the upper path, we will have strings starting with that token. Also, "*" is in that set. So we have a first set consisting (so far) of {+, *}. Now moving into diagram Y is also a possibility. If we did that, we would see the tokens "+" and "-", followed by some other tokens. So "+" and "-" (the first set of the start edge of diagram Y) need to be added as well. That exhausts all the possibilities, so we conclude that this first set is exactly {+, *, -}. Of course, the "+" token was contributed in two different ways, but we'll ignore that for the moment.

Let's find some more first sets. The first set on the start edge of the Y diagram is clearly {+, -}. Now look at the edge from + to Y in the X diagram. Since we are (again) considering entering Y, and the first set on the start node of Y is {+, -}, it's clear that these two first sets are also {+, -}. This line of reasoning yields the decorated diagram in figure 7, which is the same as figure 4 with more of the first sets shown. In particular, we've noticed that the first set going into the identifier node is just the identifier itself, abbreviated as id.

Note that each edge in the syntax diagrams will in general have a different first set. Also note that the whole set of syntax diagrams may be needed to work out a particular first set. Finally, note that while there may be an infinite number of possible paths through the diagrams, we are only interested in the initial token of each path, ignoring the remainder of the path. At most, a first set can contain all the tokens in these diagrams. At least, it may be empty.

first Set at the Finish Edge


An interesting question is whether we can define a first set on the finish edges of our diagrams. The answer is YES. What we need to work out is what tokens can possibly come next during a parse when we return from a diagram. That's a good definition of the first set on the finish edge. (Another name for that set is follow set, since it "follows" the current diagram.)

Consider the goal diagram first. This is the diagram that we are expected to enter in order to parse a complete program. What token or tokens would you expect to come after parsing a complete program? Clearly, one of these should be an end of file. There are two ways of dealing with the end-of-file "token". We could write it into the goal diagram explicitly as the last token seen before exiting. In that case, the goal's finish node would have an empty first set. But then the goal diagram could not be referred to anywhere else in the set of diagrams, because we don't want to see more than one end-of-file associated with a program.

We'll take an alternative approach, and not write the EOF token into the diagram directly. But we will consider it to belong to the first set on the finish edge of the goal diagram. So that's one token on that finish edge. What other tokens can we discover? Well, we have to investigate how this diagram was used elsewhere in the diagram set. Suppose this diagram's name is X. We then need to find all the boxes named X throughout the set of diagrams. We can then assume that diagram X might have been entered (called) from any of those boxes. Now look at the first sets on the exit edge of each of those boxes. Their union represents all the tokens that could possibly be seen next upon exiting from diagram X.

Let's apply this idea to figure 6. What's the first set on X.finish? We know that the end-of-file token belongs there; we'll call it $. There aren't any X boxes in either diagram, so that's all we get. What's the first set on Y.finish? The end-of-file token does not belong there automatically, since Y is not the goal diagram. However, there are three Y boxes in the diagrams. All three have an exit point that leads to finish.X. Since the first set at finish.X is { $ }, we conclude that { $ } also belongs on finish.Y. It's ironic that it appears anyway, but not for obvious reasons.

We are now ready to describe an algorithm, C6, which will find all the first sets on all the edges of all the syntax diagrams in a set of diagrams. It will find them all at once, in a sense, rather than try to find them individually.

Algorithm C6: Find the first sets in a set of syntax diagrams


1. The first set for an edge directed into a token node consists of the set containing that token alone.
2. Consider any exit edge finish of the goal syntax diagram. The first set of finish will contain the stop or end-of-file token.
3. The first set for an edge fork E that splits into two or more edges E1, E2, ... consists of the union of the first sets of the split edges E1, E2, ...
4. The first set for an edge that enters a nonterminal node N is equivalent to the first set on the start node of N's syntax diagram. This means that when you've found the first set at the start node of some diagram N, you can copy that set into every in-directed edge of every N box in the diagrams.
5. The first set for each finish edge of syntax diagram N will contain the union of the first sets of each edge directed out of the nonterminal boxes labelled N. This means that when you've found the first set on the out-edge of any box N, you can add that set (union-wise) to the first set on the finish node of the N diagram.
6. Repeat steps 3, 4 and 5, considering all edges in the diagram, until no more increases in any of the first sets occur.

Let's now discuss each of these rules 1-6. Rule C6.1 should be obvious. If we are on an edge that leads directly to a token node, then the first set on that edge will be that token. In figure 6, the edge going into the + token will carry the first set {+}. Rule C6.2 just says that we are going to add the EOF token to the finish edge of the goal diagram. This follows the discussion we've given in the previous paragraph.


Rule C6.3 should also be obvious. If an edge forks into two or more other edges, its first set has to be the union of those on its subsequent edges. We'll discuss C6.4 and C6.5 below. The repeat rule C6.6 is very important, because (as we'll discover), when we increase the first set on some node, we often have to increase a first set somewhere else in the set of diagrams. This may have a ripple effect passing into other first sets. By repeating rules 3-5 until we can no longer increase any of the first sets anywhere, we make sure that we have found the maximum union sets. Step (6) is a transitive completion rule.

Rule C6.4 Illustrated


This rule was discussed earlier in conjunction with figure 6. The first set on the start edge of diagram Y is {+, -}, clearly, because of the two choices provided there. This first set can then be immediately copied to the input edge of every box in the diagrams labelled Y. That's the essence of this step.

Now consider it generally; see figure 7. The upper part of figure 7 represents part of some diagram containing a nonterminal node N. Edge Q leads directly into box N. The lower part of the figure represents the syntax diagram for the nonterminal N. Edge P is the start or entry edge of this diagram.

[Figure 7. Step C6.4 illustrated: the first set {t1, t2, t3} on entry edge P of diagram N is copied to edge Q, which enters a box N in some other diagram.]

We copy the first set {t1, t2, t3} found on edge P to edge Q. The general idea is that if we are on edge Q, we are about to enter the diagram N. Box N represents syntax diagram N; you can think of diagram N as an enlarged map of box N. Clearly, the tokens we see upon entering diagram N will be the ones we expect to see on the edge entering box N.

A box like N may appear in several places in the set of diagrams. You need to find each of them and copy diagram N's first set to each of their entry edges. If diagram N's first set should be expanded later, then that expansion should also be copied to each of the boxes N.

Rule C6.5 Illustrated


This rule describes how to determine the first set on any of the finish edges. As we've discussed above, this is done by finding each of the boxes that carry the name of this diagram. The first sets on the exit points of those boxes can then be joined into a composite first set on the finish edge.

Rule C6.5 is illustrated in figure 8 in a general way. The upper part of the figure represents any diagram containing a node N. The lower part of the figure represents the syntax diagram for the nonterminal N.

[Figure 8. Rule C6.5 illustrated: the first set {t1, t2, t3} on edge P, leaving a box N, is copied to Q, the exit edge of diagram N.]

Whatever first set appears just after box N in the upper diagram can be copied into the exit edge of diagram N. By copy, we mean of course a union in the set sense. If some tokens are already in the first set on edge Q, they remain, and the new ones are merged into it. A nonterminal node N may appear in several places in the set of syntax diagrams. Form the union set of the tokens on their exit edges (like P), and transfer that union set to Q. If there's more than one exit edge in diagram N, the same union set should be copied to each of them.

A rationale for this action goes something like this: Suppose we are about to exit diagram N. The question is, where did we come from? That is, how did we get into diagram N in the first place? The answer is that in general, we can't tell, but we could have entered diagram N through any of the N reference boxes. So by exiting from the N diagram, we will find ourselves exiting from one of the reference boxes labelled N. Since we don't know which one, we form the union set of all the sets on the exit edges of the reference boxes, and make that the first set on the exit edge of diagram N. We will therefore have formed all the possible tokens that we will see next upon emerging from diagram N.

One exception to this rule exists, and that's if diagram N is the goal diagram. The goal diagram is a little different, because we may have entered it as the very first diagram. We therefore expect to see an end-of-file upon emerging from it. That's rule 2: place an EOF token on the finish edge of the goal diagram. EOF may appear on other finish edges thanks to rule 5, but that isn't always the case. It doesn't automatically appear on other finish edges.

Transitive Completion
Finding the first sets requires applying rules 3 through 5 over and over again. You will discover that when the first set is augmented on some edge, that set will become involved in augmenting other edges. For example, in figure 7, we augmented the set on edge Q, which happens to be directed into nonterminal N. However, edge Q may be an exit edge of another nonterminal, which will cause rule 5 to be invoked. That change may require augmenting another set, and so forth.

This is called the transitive completion of the first sets. Given some relation R between sets A and B, which we express as A R B, R is said to be transitive if A R B and B R C implies A R C. In our first set algorithm, we discover that some edge A causes the first set on some edge B to be augmented. But edge B may affect the first set on edge C. So we need to propagate the change on edge A through to edge C.

When will this propagation of first sets end? In some diagrams, it may seem to go on forever. But of course, it can't. Since we are always forming a set union of tokens at the edges, we cannot continue to augment the first sets forever. In the worst possible case, the largest set that can be formed is the set of all tokens, which is a finite set. So the repetition phase has to end. It's just that there's no way to predict how many applications of rules 3, 4 and 5 are needed before the process stops. And it will stop when a complete pass through all the diagrams using rules 3, 4 and 5 fails to augment any first set.

When doing this by hand, it's usually a good idea to follow the rules in the order given: 1, 2, 3, 4, 5, filling in all the first sets throughout the diagrams with one rule before moving on to the next. That won't guarantee finishing the sets in one pass through the diagrams and rules, but it will usually mean completion in two or three passes at most.
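One way to see the transitive completion mechanically: every application of rules C6.3, C6.4 and C6.5 amounts to a subset constraint between two edges' first sets, and rule C6.6 iterates those constraints to a fixpoint. A C++ sketch, with an assumed data layout that is not from the book's tools:

#include <set>
#include <string>
#include <vector>
using namespace std;

typedef set<string> TokenSet;

// Union src into dst; report whether dst actually grew.
bool unionInto(TokenSet &dst, const TokenSet &src) {
  size_t before = dst.size();
  dst.insert(src.begin(), src.end());
  return dst.size() != before;
}

// Each constraint says: first[from] must be contained in first[to].
// Rules C6.3-C6.5 all reduce to constraints of this shape once the
// diagrams have been examined.
struct CopyEdge { int from, to; };

void transitiveCompletion(vector<TokenSet> &first,
                          const vector<CopyEdge> &constraints) {
  bool changed = true;
  while (changed) {             // rule C6.6: repeat until nothing grows
    changed = false;
    for (const CopyEdge &c : constraints)
      if (unionInto(first[c.to], first[c.from]))
        changed = true;
  }
}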

Finding the first Sets for the Expression Diagrams


Let's see how all this works for the expression diagrams in Figure 10. (You should make a drawing of the diagrams, to large scale, then fill in the first sets as suggested by the following diagrams.)

Start with the edges leading into token nodes. In simple_expression, these are the in-edges to the + and - nodes. See figure 9, which is the front end of the simple_expression diagram. The first(+, -) set is formed by using the union rule 3. We will have to augment this set later, when we discover what the first set is for the edge that leaves this part of the diagram, i.e. first(???). That's because there's a "shortcut" path from the left to the right of the diagram. In fact, this will turn out to be the first set for the start edge of the term diagram.

[Figure 9. first sets found in the front end of simple_expression: first(+) and first(-) by rule 1, first(+, -) by rule 3, and the as-yet-unknown first(???) on the outgoing edge.]

Now the first set of the term start edge is the first set of the factor start edge. And this, as we've seen, is the set {identifier, number, ( }. So this set can be added to the in-edge of each of the diagrams containing a factor reference box, and also to those containing a term box.

Now look at the term diagram. What is the first set for the finish edge in factor? Clearly, this contains the first set of the finish edge of term, which contains the first set of the finish edge of simple_expression plus + and -. The finish edge of simple_expression contains a stop token, assuming that expression is the goal syntax diagram. So that first set contains {+, -, stop}. There's more: ) can follow expr, so it's in the finish edge of simple_expression, causing it to be in our first set as well.

The trick is to keep applying all the rules and always augment existing sets. An addition made to one first set may affect some other set elsewhere in the whole set of syntax diagrams, so you need to examine all the diagrams with respect to all three rules C6.3, C6.4 and C6.5. Eventually, the process must stop, when it's no longer possible to union in any new members to any existing set.

Figure 10 shows the first sets for the arithmetic expression grammar we examined earlier in this chapter. The symbol $ stands for the end-of-file token.


[Figure 10. Syntax diagrams for an arithmetic expression with first sets. The diagram itself is omitted here; it shows the expression, simple_expression, term and factor diagrams with first sets such as {+, -, id, num, (} and {+, -, $, ), *, /} decorating their edges.]

Syntax Diagram Validity


Look at the simple_expression diagram, and the exit edge from the first term in it. The upper branch carries the first set { $, ) }. This is clearly distinct from the { +, - } set for the other branch, therefore there's no problem with this decision, based on the next token. We can therefore test the next token during parsing, and use that token with our signposts to decide on the upper or lower route. If the next token is in { $, ) }, we should take the upper path. If the next token is in the set { +, - }, we should take the lower branch. These two sets have no token in common, so the decision is unambiguous at this junction.

We say that the token sets are pairwise disjoint. The word joint refers to a set intersection; the join of two sets is the same as the intersection, i.e. the set containing all elements in both sets. Disjoint means the intersection is empty; there are no elements in common in the two sets. Finally, pairwise says that we must consider each possible pair of token sets. Since some decisions can branch more than two ways, we need to look at all possible pairs of tokens on the branch possibilities. If we have a 3-way branch, there are 3 possible pairs. If we have a 4-way branch, there are 6 possible pairs. In general, the number of pairs to consider is n*(n-1)/2, where n is the number of branches.

Similarly, look at the term diagram, and the exit edge from the first factor in it. The upper branch carries { +, -, $, ) }, which is distinct from the { *, / } set required for the other branch. Since all the multiple branch edges have pairwise-disjoint first sets, we conclude that condition C5 is true for this diagram set.

What about the overall validity of the syntax diagrams themselves? Is it possible that a decision can't be made at some node in every case? In fact, it is possible. If two edges exiting from some node carry one token (say +) in common, then some input sentence will demand that we make an impossible decision at that edge. We say that the edge's first sets are not pairwise disjoint. The parsing engine must know exactly which route to follow in every case. It would not know which direction to take if the next token were a +. It must be able to decide between the two routes, each of which carries a + token in its first set.
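This check is mechanical; a small C++ sketch, with an assumed first-set representation:

#include <set>
#include <string>
#include <vector>
using namespace std;

typedef set<string> TokenSet;

// True if the first sets on a fork's out-edges are pairwise disjoint.
// With n branches there are n*(n-1)/2 pairs to test, as noted above.
bool pairwiseDisjoint(const vector<TokenSet> &branchFirsts) {
  for (size_t i = 0; i < branchFirsts.size(); i++)
    for (size_t j = i+1; j < branchFirsts.size(); j++)
      for (const string &tok : branchFirsts[i])
        if (branchFirsts[j].count(tok))
          return false;    // tok appears in two competing first sets
  return true;
}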

Ambiguity
A set of diagrams such that a particular sentence can be accepted in two different ways is said to be ambiguous. By different ways, we mean through different paths through the set of diagrams.

When a set of diagrams is ambiguous, then there has to be at least one edge in which the first sets are not pairwise disjoint. That's easy to prove. Assume that the sentence exposing the ambiguity is t1 t2 t3 ... tn, where the tokens are ti. Then in general, the diagrams will accept some of the tokens along a common path, but at some particular token tk there has to be a parting of the ways in the diagram path. Otherwise the separate paths for the same sentence would not exist. But at that token and position in the diagrams, there must be a non-disjoint first set, in particular, a choice between two first sets, each of which contains token tk.

So if a set of diagrams is ambiguous, then somewhere they must contain a non-disjoint first set. This also implies that if our diagram set is free of non-disjoint first sets, then they must be unambiguous. It should be clear that any sentence accepted by an unambiguous diagram set can be accepted in exactly one way, i.e. with one and only one path through the diagram set.

A syntax diagram set is unambiguous if, at every fork in the diagram, the first sets of the competing out-edges are pairwise disjoint.

Having pairwise disjoint first sets is a necessary condition for a well-behaved diagram set. It is not sufficient. You must also ensure that the syntax diagram set also satisfies conditions C1 through C5, discussed earlier. Also, a diagram may contain a pair of non-disjoint first sets at some branch point without being ambiguous. We discussed this situation earlier, with regard to figure 5. Every ambiguous diagram set will have a non-disjoint branch point somewhere; however, the existence of a non-disjoint branch point does not imply the diagrams are ambiguous.

Empty FIRST Set


What if a first set on some edge is empty, i.e. contains no tokens? This will happen if a call cycle exists in the set of diagrams. Suppose that diagram A is directed into diagram B, which is directed into diagram A, as in figure 11.

[Fig. 11. Invalid diagram set: diagram A leads directly into B, and diagram B leads directly into A.]

Then each of the edges into these diagrams will be empty; there are no tokens to associate with them. If this happens, the diagrams are invalid. To see why, try entering diagram A. By entering A, we must enter B, then enter A, then enter B, ad infinitum. There is no way to exit diagram A. (These diagrams also fail condition C5.)

Using the first Sets in Parsing


The first sets are an invaluable aid to writing a parser for the language of the syntax diagrams. To see why, notice that for a set of diagrams to be valid, they must have first sets that are pairwise disjoint on every branching edge. These are the edges that concern us most when writing a parser. Such edges require that we peek at the next token, then make a branching decision based on the result. In short, which way should the program go next during the parse? We can now choose the edge whose first set contains the token under the read head. Having the first sets along each of the possible paths, and also knowing that they are pairwise disjoint, is what makes this possible.

Discovering a Syntax Error


If the read-head token does not match any of the first set tokens, we can declare a syntax error. For example, look at figure 10, and consider the choice in the term diagram upon emerging from the leftmost factor box. There are three possible paths here. The upper path carries the first set { +, -, $, ) }, and it leads to exiting from the diagram. (This first set in fact was obtained from the finish node first set.) The other two paths take us into the token * or /, respectively.

So at this point during a parse, we could inspect the token under the read head. If it's a * or /, we clearly move into the two token boxes. If it's in { +, -, $, ) }, then we should take the upper path. But if it's any other token, we have a syntax error at this point. We were not able to conclude this without knowing the first sets.

Reporting a Syntax Error


If a syntax error is detected, it should of course be reported to the programmer. When you write your own recursive descent parser, it's up to you to design a syntax error reporting scheme. At the very least, you should print the offending line, and indicate where in the line the syntax error was noticed, like this:
>> a:= b+ -c * f;
          ^ Syntax error

This is a basic report. It should encourage the programmer to study the syntax diagrams or language definition a little more, and, of course, to repair the source file. This particular error is an easy one to make in Pascal, since it seems OK by algebraic rules. However, it is an error, as you'll discover if you trace the diagrams in figure 10 starting with expression and the token "b". (The "a :=" part calls for yet another diagram to specify more details for this assignment statement.)

You could provide more assistance to the hapless programmer by giving a list of expected tokens. At the point of the syntax error, the token actually found (the "-") won't work, but we know a set of tokens that could work. These are, of course, the first set tokens. This error will likely be caught just after the "+" token in the simple_expression diagram. (The leftmost term will have scanned token "b" successfully.) Notice that the only permissible tokens on entry to the rightmost term box are {id, num, (}, not "-". So you can arrange for your program to report a list of expected tokens by just printing the first set at the point of detecting the syntax error. After all, that's how you detected the error, isn't it? Your improved error report looks like this:
>> a:= b+ -c * f;
          ^ Syntax error, expecting Identifier, Number, (

Recovering from a Syntax Error


Once a syntax error is detected and reported, what should your program do next? There are three possibilities:
- Just quit. Call exit() and your parsing is done. It doesn't matter how deeply your parsing functions are nested; they and your runtime stack, etc. will be cleaned up by the operating system.
- Return from the function.
- Try to patch over the error in the hope of finding more syntax errors later.

The first strategy is easy. It's a "cop-out", in street jargon. When you exit on the first syntax error, you are in fact expecting the programmer to try to fix the error, then run your parser again. This is not necessarily a bad strategy. Considering the speed of modern computers, parsing even a large program can be done in less than a second. If your development environment involves bringing up an editor when a syntax error is detected, positioned at the line containing the error, you've made life very easy for the programmer. Fix the error, hit a "compile" button and it's off to find the next error.


For a batch environment, or one in which compilation can take a long time, it's better to consider the second or third strategy. The second strategy is another "cop-out", one that will almost surely cause another syntax error report, and another, etc. But finding a better alternative is not easy, either.

With the third strategy, we enter uncharted territory. There is no one good, foolproof, successful error recovery strategy possible. There have been many papers written on error recovery, yet even the best compilers can get hopelessly confused after discovering one innocent little error. Every programmer knows that. Yet many errors can be patched over successfully, provided that there's a reasonable error recovery strategy in place in the compiler. We can design one that works reasonably well in most cases. It goes like this, for our recursive descent framework:
- The syntax error will be discovered at some point within some syntax diagram D. We choose to skip tokens until a token is seen (peeked at) that belongs to the first set on the finish edge of D. Then return from function D.
- The EOF token should always be looked for, even if it doesn't appear on the finish first set. Finding an EOF while scanning for a token should always result in a return. You can't skip an EOF; nothing follows it.
- It's a good idea to suppress error reporting (just ignore future errors) until at least three successive tokens can be successfully parsed. This is easy to do. Just set a counter to 3 when an error is discovered. If the counter was previously 0, we report the error, otherwise we don't report it. For each new token successfully parsed, the counter is decremented.

This strategy rests upon the assumption that upon returning from this function, we expect to see a token in the first set on the finish edge. But that's not guaranteed to yield a perfect parse from there on, since that set came from all the ways in which this diagram was called, and not this particular way. We just don't know which box in which diagram was called to invoke "this" function. So this strategy can fail on that basis. But at least, we aren't wasting our time with tokens that can't possibly be accepted upon a return.

This strategy can fail in other ways as well. A successful recovery may require just exiting from one or more functions before continuing. Inserting a token may also result in a successful parse. Unfortunately, recursive descent does not permit following up on either of these better strategies. When your program is running a particular function, there's no easy way to tell how it was entered. (That information is buried in the runtime stack, which is difficult to extract from within a C or C++ environment.) Furthermore, you can't really try out an exit from a function, in the sense of discovering whether that makes sense or not. If you exit from a C function, that's it. There's no easy way to get back in and try something else. You also cannot easily try out different recovery strategies, since these necessarily entail calling other parsing functions, which will not be aware of your experimentation.

A possible way out of this dilemma under Unix is to fork a child task whose mission is to explore some strategy. Forking a task implies copying everything from the parent task, including the runtime stack, function calls, etc. If the child succeeds, it can return a code that indicates its success, then the parent can continue with that particular strategy.
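Here is a sketch of that recovery discipline in C++. The Clexf scanner class used earlier in this chapter is assumed, and each parsing function is assumed to supply its finish-edge first set:

#include <iostream>
#include <set>
using namespace std;

extern Clexf lex;              // the scanner used throughout this chapter
static int errorCounter = 0;   // nonzero: we are still resynchronizing

void reportSyntaxError(const char *expected) {
  if (errorCounter == 0)
    cerr << "Syntax error, expecting " << expected << endl;
  errorCounter = 3;            // stay quiet until 3 tokens parse cleanly
}

// Call this after each successfully parsed token.
void tokenAccepted(void) {
  if (errorCounter > 0) --errorCounter;
}

// Skip tokens until one in the finish-edge first set appears.
// EOF always stops the scan: nothing follows it, so it can't be skipped.
void recover(const set<int> &finishFirst) {
  while (!lex.atEOF() && finishFirst.count(lex.tokenNumber()) == 0)
    lex.tokenRead();
}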
Performance during error recovery is usually not an issue, since it won't happen for syntactically correct programs, and that's what most compiling is all about. Error recovery might also be limited by setting an alarm clock that just exits the task if error recovery is taking too long. Incidentally, using the second strategy (returning from the function without bothering to look at tokens) may work, but is likely to fail more often than the third strategy. In fact, the most likely result is reporting more syntax errors and eventually returning from the main program.

Summary
In this chapter, we've looked at all of the following topics:
- We've seen that a set of syntax diagrams can represent a computer language. A syntax diagram is very much like a finite-state machine, except that it contains moves in which another diagram is invoked to continue the parsing.
- We've seen that a set of syntax diagrams usually can't be folded into a single FSM. An implementation requires a push-down stack during parsing, which is provided by the runtime stack used for function calls. A set of syntax diagrams can be implemented as C functions, one function per diagram.
- We've looked at a set of diagrams that represent algebraic expressions, with the four arithmetic operators and parenthesizing. Such expressions are called infix, and are commonly used in programming languages for evaluations.
- We've implemented several infix expression translators: an infix-to-postfix translator, an infix calculator, and an infix to Intel CPU assembler. All are minor variations on a basic parsing platform implemented from the expression syntax diagrams.
- Our implementation draws upon the lexgen scanner system developed previously, which reads an input file, skips comments, and collects tokens as support for our recursive-descent parser.
- We've examined the question of syntax diagram validity by defining first sets on diagram edges. These are very similar to the first sets defined in chapter 7; indeed, recursive descent is essentially LL(1) parsing dressed up in terms of function calls rather than table lookups.
- first sets are also useful during implementation of a syntax diagram set as a parsing program. They can also be used to aid syntax error analysis, error reporting, and error recovery.


Chapter 9: LR Bottom-up Parsing


W. A. Barrett, San Jose State University nch9.doc, vs. 2

Parsing as Tree Building


Recall (from chapter 4) that a bottom-up parser must construct the derivation tree for some sentence upward in the tree toward the goal node. We also want the sentence parsed from left to right. In terms of derivations and derivation ordering:

A bottom-up parse is a right-most derivation performed in reverse.

By performed in reverse, we mean that the parser must work out the derivation steps in reverse order. The very first parsing step is the one that yields the terminal sentence. The last step in the parse is for the goal production rule:
G → E

Also recall that each sentential form involved in the parsing operation has the form αβx, where αβ is called the viable prefix, β is called the handle, and x is called the open portion of the sentential form. The string αβ is called the closed portion of the sentential form, although it is not, strictly speaking, closed. It will be reduced in later parse steps. The handle β is involved in the next parse step, which involves some production rule X → β. The open portion x is a terminal string, and matches what's left of the input sentence. The string α consists of some mixture of terminals and nonterminals, in general (it may also be empty or be entirely one or the other). It is not involved in this parsing step, but will be involved in some later parsing step. Any or all of these three strings may be empty. The very first sentential form is, of course, the terminal input string. The very last sentential form will consist of the goal nonterminal, G.


[Figure 1. Bottom-up parsing. A partially built derivation tree: subtrees rooted at E, + and T already cover the scanned part of the input, "( id ) + id"; the rest of the sentence is unscanned, and the upper part of the tree, reaching the goal G, is yet to be constructed.]

Consider figure 1 (above), which is attempting to construct a derivation for


(id)+id*id

in the g0 grammar
G → E
E → E + T
E → T
T → T * F
T → F
F → ( E )
F → id

The Constructed Trees


You will notice in figure 1 above that three trees have been constructed. These are the roots of the three trees to the left of the figure, with the roots E, + and T, respectively. The leftmost E has a branch that extends down to cover
(id)

The next tree is the + node. We consider this to be a tree since it has a node and appears in the derivation. The third T tree covers


id

in the input sentence. At the point in the parse shown in figure 1, we therefore have this sentential form:
E + T * id

You can discover where this appears by writing out the complete rightmost derivation of the sentence, as follows. The numbers on the right show the order in which the parsing occurs. The derivation, of course, starts with G and works downward, while the parser starts with the sentence, then works backward toward G:
G
E                    (12)
E + T                (11)
E + T * F            (10)
E + T * id           (9)
E + F * id           (8)
E + id * id          (7)
T + id * id          (6)
F + id * id          (5)
( E ) + id * id      (4)
( T ) + id * id      (3)
( F ) + id * id      (2)
( id ) + id * id     (1)

The partially-completed tree shown in figure 1 corresponds to step (9) above. We've managed (somehow) to work through steps (1) through (9), in that order. The parsing decision in step (9) is to discover the handle, along with the production rule associated with that handle (there may be more than one). In step (9), the handle is just the identifier id, and the production rule is F → id. In step (10), the handle is T*F, and the production rule is T → T*F.

As we discussed earlier, this appears to be a very difficult problem. It remained unsolved for about a decade after the first compiler was constructed. Then Knuth published his paper on LR translations [1] in 1965 that solved the problem, at least the critical part. A key part of Knuth's proof was that the viable prefix during a parse forms a regular language. In other words, there is a finite-state machine M that can recognize the viable prefix in any sentential form. Furthermore, the machine M can be made to report a particular production rule upon discovering the viable prefix, and with that information, the handle can be deduced.

Viable Prefixes and Handles


Let's look at all the viable prefixes in the parse above, to get a better feel for the concept. Consider the sentence (step 1 above). Its viable prefix will be
( id

because the production rule involved in moving from step (1) to step (2) is
F → id

Now that the sentential form is


( F ) + id * id (2)


the viable prefix will be


( F

since the next production rule to be applied is


T → F

Following this idea, here are the derivation steps, with the viable prefix shown for each step. We've also listed the production involved in getting from step k to step k+1.

G
E                    (12)  G → E          viable prefix: E
E + T                (11)  E → E + T      viable prefix: E + T
E + T * F            (10)  T → T * F      viable prefix: E + T * F
E + T * id           (9)   F → id         viable prefix: E + T * id
E + F * id           (8)   T → F          viable prefix: E + F
E + id * id          (7)   F → id         viable prefix: E + id
T + id * id          (6)   E → T          viable prefix: T
F + id * id          (5)   T → F          viable prefix: F
( E ) + id * id      (4)   F → ( E )      viable prefix: ( E )
( T ) + id * id      (3)   E → T          viable prefix: ( T
( F ) + id * id      (2)   T → F          viable prefix: ( F
( id ) + id * id     (1)   F → id         viable prefix: ( id


An LR(0) Finite State Machine


What does that finite machine look like? Rather than jumping into its construction just now (we'll do that later), let's look at the one that Knuth's method derives for our grammar g0, given below.

[Figure 2. The LR(0) machine for grammar g0 (the diagram itself is omitted here). State 1 is the start state. Reduce states are marked with production rules: state 3 (F → Id), state 5 (T → F), state 11 (F → ( E )), state 12 (E → E + T) and state 13 (T → T * F). State 9 carries G → E EOF, followed by a halt. State 6 is a mixed state, offering the reduce E → T or a shift on * to state 10.]

Here's how it works: state 1 is the start state. Use the machine to parse through a complete sentential form. (This can be the initial sentence.) Just follow the arrows with the indicated tokens or nonterminals. Notice that it can parse through nonterminals as well as terminals. When you reach a state marked with a production rule, replace the handle with the rule's left member nonterminal. For example, if in state 11, associated with the rule F → ( E ), you have reached the end of a viable prefix that ends in the string ( E ), so replace ( E ) by F in the sentential form. This is called a reduce action.


Example
Let's see how this sentence would be parsed with this machine: a+b EOF. The start state is 1. That state requires that we scan the identifier a, then go to state 3. In state 3, we are supposed to reduce the identifier a with the production rule F → Identifier. So our sentence becomes the sentential form
F + b EOF

Now start over with this sentential form at start state 1. We start over on the machine after each reduce operation, making sure that we replace the appropriate portion of the sentential form according to the production rule. We therefore see token F. State 1 says scan the F, and go to state 5. In state 5, we are to reduce the F to a T. So we now have the sentential form
T + b EOF

You will notice that the sentential forms and the production rules applied are deriving our sentence in reverse order, just as we expect for our bottom-up parsing method. Starting over, state 1 says scan the T and go to state 6. State 6 is a mixed state. We can either apply the production rule E → T, or move to state 10. This is a conflict. We've worked out a resolution in this state, but let's pretend that we haven't. The shift operation (to state 10) clearly won't work; it requires a * symbol, and we have a + here. So our only choice is another reduce, this time to an E. We now have the sentential form
E + b EOF

Starting over, state 1 says scan the E and go to state 4. State 4 says scan the + and go to state 8. State 8 says scan Identifier (b), then go to state 3. State 3 says reduce the Identifier with F → Identifier. So now we have the sentential form
E + F EOF

...and so forth. You can work completely through the parse this way. The very last step will involve the sentential form
E EOF

which takes us from state 1 to state 4 to state 9. This involves production rule G → E EOF, which has to be followed by a HALT.

Generating the Parser FSM


How on earth was this machine generated? It's a long story. And there's a theory that should be developed in order to prove that it performs as we claim (we won't do that). The theory is more fully developed in several books, in particular, Knuth [1], and Aho and Ullman [2]. We need to start with some definitions.

Item
An item is a marked production rule. The mark will always appear in the right member, just before it, just after it, or inside it somewhere. For example, if
E -> E + T

is a production rule, then


E -> . E + T
E -> E + . T


E -> E + T .

are each items. The first item has its mark (a period) just before the E in the right member, and the last one just after the T in the right member. Since there are three symbols in the right member, there are four possible positions for the mark. When a production rule is marked at the end of the rule, like this:
E -> E + T .

we say that the item is completed.

Item Set
An item set is a set of items. An item set will be associated with a parser state, so we'll number the item sets (or states) as we construct them. The construction of an LR parser amounts to knowing just how to construct its item sets. Here's a typical item set as it is reported by Qparser:
state 2:
 0  SHIFT 6: F : . ( E )       => 2
 1  SHIFT 7: F : . Identifier  => 3
 2  SHIFT 2: E : . E + T       => 7
 3* SHIFT 6: F : ( . E )       => 7
 4  SHIFT 5: T : . F           => 5
 5  SHIFT 3: E : . T           => 6
 6  SHIFT 4: T : . T * F       => 6

Each state has one or more items. This one has seven, marked 0, 1, 2, ... , 6. The marked production rule just after the 3* SHIFT 6: fragment is
F : ( . E )

The mark is the period just after the left parenthesis. The asterisk "*" on item number 3 marks a kernel item; we'll explain that later. The SHIFT indicates that this will be involved in a SHIFT action, but that's only if all the item sets are SHIFT-type. The arrow "=>" indicates a SHIFT transition to another state. You'll also find some item sets with a REDUCE mark. These have no transitions to another state, as we'll see. The ordering of the production rules within an item set is not important. It's also a true set in the sense that the items of any one set are pairwise distinct. The same production rule may appear several times within an item set, but the mark must appear in different places.
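An item has a natural machine representation: a production-rule index plus a mark position. The sketch below shows one plausible C++ encoding; these are illustrative types, not Qparser's actual data structures. An item set becomes an ordinary set of (rule, mark) pairs, which keeps the items pairwise distinct for free:

#include <set>
#include <string>
#include <utility>
#include <vector>

// One production rule: lhs -> rhs[0] rhs[1] ...
struct Rule {
    std::string lhs;
    std::vector<std::string> rhs;
};

// An item is a rule index plus a mark position: the mark sits just
// before rhs[dot], so dot == rhs.size() means the item is completed.
typedef std::pair<int, int> Item;    // (rule index, dot position)

typedef std::set<Item> ItemSet;      // items are pairwise distinct

// With rule number 2 being E : E + T (as in the Qparser dumps below),
// the items  E : . E + T,  E : E + . T  and  E : E + T .
// are Item(2,0), Item(2,2) and Item(2,3) respectively.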

Constructing an LR State Machine


We're now ready to describe how to construct an LR state machine. Technically, this is an LR(0) construction. We'll extend it through a lookahead trick as needed to resolve any conflicts that appear.

The g0 Grammar
Here's the grammar that we'll use to illustrate constructing an LR parser:
G : E EOF
E : E + T
E : T
T : T * F
T : F
F : ( E )
F : Identifier

In this, EOF represents the end-of-file character. Identifier represents any identifier; we'll call this

id in some of the illustrations.

Starting the State Machine and the Initial State


We need to create a HALT state, which we'll assign number 0. This has no items in it, and it doesn't show up in a Qparser dump. We next create an initial state that contains a single item, the goal production with a mark just in front of its right member. This will be the initial state of the parsing engine. Call it state 1. Its item set looks like this so far:
state 1
G : . E EOF

This item set is the kernel of the final item set for this state. We need to add some more items to this, as explained next in a completion operation.

Completion Operation
Every item set must be completed by adding more items to it, through the following rule: For every item of the form

X : α . Y β

include every possible item of the form

Y : . ω

where Y : ω is a production rule in the grammar (α, β and ω stand for arbitrary, possibly empty, strings of terminals and nonterminals).

Let's rephrase this:
1. Look for a production rule with a mark followed by a nonterminal, e.g. . Y
2. Add all the production rules in which Y is the left member as items, with the mark in the leftmost position.
3. Of course, we actually only add an item if it isn't already in the item set.
4. Continue looking for production rules with a mark followed by a nonterminal, and repeat step 2 for each one.

Let's do this for state 1. Since we have
G : . E EOF

we need to find all the production rules of the form E : ω. There are two of them:

E : E + T
E : T

We therefore must add the items


E : . E + T
E : . T

to our item set. We've put the mark at the left end of the right member of each production rule. State 1 will now look like this:
state 1
G : . E EOF
E : . E + T
E : . T


But we aren't through. Now we need to find all the production rules of the form T : ω. There are two of them:

T : T * F
T : F

This results in the expanded item set


state 1
G : . E EOF
E : . E + T
E : . T
T : . T * F
T : . F

We still aren't through. We now need to find all the rules of the form F : ω. There are two of them:

F : ( E )
F : Identifier

These complete state 1, which now looks like this:


state 1
G : . E EOF
E : . E + T
E : . T
T : . T * F
T : . F
F : . ( E )
F : . Identifier

This item set is complete, since we can no longer generate any new and unique items using the completion rule.
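The completion rule mechanizes directly. Here's a minimal sketch using the illustrative Rule/Item types from the earlier sketch (again, not Qparser's own code): it rescans the item set until a full pass adds nothing, which is exactly the "no new and unique items" condition above.

// Complete an item set in place.  Terminals need no special handling:
// no rule has a terminal as its left member, so the inner loop simply
// finds nothing to add for them.
void closure(const std::vector<Rule>& g, ItemSet& items) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (ItemSet::const_iterator it = items.begin(); it != items.end(); ++it) {
            const Rule& r = g[it->first];
            if (it->second >= (int)r.rhs.size())
                continue;                              // completed item
            const std::string& y = r.rhs[it->second];  // symbol after the mark
            for (int i = 0; i < (int)g.size(); ++i)
                if (g[i].lhs == y && items.insert(Item(i, 0)).second)
                    changed = true;                    // a genuinely new item
        }
    }
}

(Inserting into a std::set never invalidates its iterators, so adding items during the scan is safe; the outer loop picks up anything a pass missed.)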

The Goto Operation


New states are generated through the goto rule. Here, we look at symbols that follow the mark in some state's item set. For example, in state 1, the first two items are such that E follows the mark. The next two items have T follow the mark. Sometimes, the mark is at the end of the production rule (we'll see how that happens shortly), in which case we say that the item is completed. (That isn't the case here). The symbol following the mark may be terminal or nonterminal. In state 1, for example, the last item has the terminal Identifier following its mark, whereas the first item has an E following the mark. An item set must be completed before applying the goto rule. We apply the goto rule to each subset of an item set for which a particular symbol follows the mark, as follows:


1. Consider some state S, which has been completed.
2. Identify a subset K of S's items, such that the mark precedes a particular symbol X. (X can be terminal or nonterminal). That is, each item of the subset K has the form

Z : φ . X ψ

where φ and ψ are arbitrary strings of terminals and nonterminals, possibly empty.
3. Create a new state P for each such subset K.
4. For each item

Z : φ . X ψ

in K, add the item

Z : φ X . ψ

to the item set in P.
5. In the state machine, let there be a state transition from state S to state P on symbol X.

Let's rephrase this:
1. Look at a completed state S.
2. Find all the items that are not completed. (They have a marker followed by some terminal or nonterminal).
3. Consider all the non-completed items K that have the same symbol following the mark. Call that common symbol X.
4. Create a new state P associated with K.
5. Make a shift transition from S to P on symbol X.
6. Copy all the non-completed items K to state P.
7. Move the mark on the items K in state P one place to the right, past symbol X.

The items K newly placed in state P by this goto rule are said to be the kernel items. P must next be completed by the completion rule. Then another application of the goto rule is needed, and so forth until no new states can be formed. Two states with the same kernels are said to be equivalent. Two kernels are equivalent if they contain exactly the same items. During the construction process, we need to make sure that we aren't duplicating a state that was already built previously, and we can do that by comparing their kernels. Let's perform the goto rule on state 1, given above. Each of its items has a symbol following the mark (none are complete). Two items have E following the mark:
G : . E EOF
E : . E + T

We therefore create a new state with the mark moved past E, like this:
G : E . EOF
E : E . + T

We can give this state any number we like, provided it doesn't conflict with some other state number. It happens that Qparser assigns 4 to this state, so we'll do that, too, to avoid confusion:
state 4
G : E . EOF
E : E . + T

A completion operation on this state does nothing to it, since there are no nonterminals following a mark. Let's go back to state 1 and find another goto operation. There are two items with T following the

mark:
E : . T
T : . T * F

so we need to create a new state containing the items


E : T .
T : T . * F

Again, we can choose any state number (other than 1 and 4, which we've used). Qparser happens to assign state 6 to this one, so we'll do that, too:
state 6
E : T .
T : T . * F

The first item in this state is completed, since the mark appears at the end of the production rule. The second item is not completed. This results in a mixed state. We'll discuss the ramifications of mixed states later. Returning to state 1 again, we discover that three more new states can be formed using the goto rule on this state. One is on the nonterminal F, and the other two on the terminals ( and Identifier.
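The goto rule mechanizes just as directly. This sketch (same illustrative types as before) builds the kernel of the successor state on a symbol x; the caller completes the result with closure(), and compares kernels against the states built so far before numbering it as a new state:

// Build the kernel of the state reached from 'items' on symbol x.
ItemSet gotoKernel(const std::vector<Rule>& g, const ItemSet& items,
                   const std::string& x) {
    ItemSet kernel;
    for (ItemSet::const_iterator it = items.begin(); it != items.end(); ++it) {
        const Rule& r = g[it->first];
        if (it->second < (int)r.rhs.size() && r.rhs[it->second] == x)
            kernel.insert(Item(it->first, it->second + 1));  // mark moved past x
    }
    return kernel;   // an empty kernel means there is no transition on x
}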

Significance of an Item set and State


It's difficult to appreciate this construction without studying the original Knuth paper or some other theoretical treatment of the problem. We haven't developed enough language theory to deal with that properly. But we'll try to make these ideas plausible, if not fully understandable. It should be clear that each item set becomes a state in the final LR parser. A state transition will occur on some token found in the item set, following a mark in an item. We can now state why the production rules appear in a particular item set, and why the mark is important: During a parse of some sentence, that production rule is among several that could be applied ultimately in a REDUCE action. The item set carries all the production rules that could possibly be involved at this point in the parse. The mark represents a boundary between the tokens in the parser's push-down stack and the tokens about to be read from the sentential form. For example, the item
T : T . * F

appearing in state 9 means that the rule T: T*F could appear later in a REDUCE action. It also means that the *F part is yet to be read. Since the mark is followed by "*", we expect that token "*" should be under the read head in order for this itemset to continue into the next state. When the mark appears at the end of a production rule, for example,
T : T * F .

it means that this production rule is ready to be reduced. The reading head of the sentential form has just moved past the handle of the form, indicating that this production rule is ready for a REDUCE action. If there are other items in this set, a conflict has arisen, since the other items will either represent competing reduce actions or a competing shift action. It's possible for both a reduce action and a shift action to be feasible, in general, so this mixed state situation is serious and must be dealt with. A discussion of mixed states and their resolution is given later.


The rules in an item set are provisional. In any particular sentence, only one of them actually will appear in a REDUCE action, and the others will be ignored, dropping by the wayside, so to speak. (The others will be involved in some other parse. The set of items forming a state represents all the possible rules that could be involved in this particular state). But once the state machine is constructed from these sets, that no longer matters. We've seen that the machine efficiently finds a bottom-up parse. So any complexity in construction of the parser doesn't matter. It should now also be clear that for a finite grammar, the number of states will also be finite. In other words, this construction process cannot go on indefinitely. Here's why not:
- Each state is an item set.
- There are a finite number of production rules, and each item in the item set is a production rule.
- There are a finite number of positions (the mark position) within each production rule, therefore a finite number of possible item sets.
- No two states may have precisely the same item sets.
Therefore the number of states is finite, although for some difficult grammars, the number of states may be extremely large. It follows from the finiteness of this state machine that the viable prefix can be described by a regular expression.

State Machine Generation with Qparser


This generation process can be followed with Qparser by running nlr1 with the option -i, like this:
nlr1 -i g0 > g0.rpt

This will generate the following report in g0.rpt, to which we've added comments. The following is the initial state (1). The kernel item(s) are marked with an asterisk (*). The remaining items appear through the completion rule. Each arrow => is followed by another state number. These are states generated by the goto rule. Note that items 5 and 6 have a transfer to state 6, which contains the same production rules with the mark moved past the T. We therefore have a transition from state 1 to state 2 on symbol "(". We have a transition from state 1 to state 3 on Identifier, and so forth.
state 1:
 0  SHIFT 6: F : . ( E )       => 2
 1  SHIFT 7: F : . Identifier  => 3
 2* SHIFT 1: G : . E EOF       => 4
 3  SHIFT 2: E : . E + T       => 4
 4  SHIFT 5: T : . F           => 5
 5  SHIFT 3: E : . T           => 6
 6  SHIFT 4: T : . T * F       => 6

State 2 is begun through the goto rule applied to state 1, item 0. This creates the kernel item 3 (marked with *). Since an E follows the mark, it must be completed, resulting in the other 6 items in this state.
state 2:
 0  SHIFT 6: F : . ( E )       => 2
 1  SHIFT 7: F : . Identifier  => 3
 2  SHIFT 2: E : . E + T       => 7
 3* SHIFT 6: F : ( . E )       => 7
 4  SHIFT 5: T : . F           => 5
 5  SHIFT 3: E : . T           => 6
 6  SHIFT 4: T : . T * F       => 6

State 3 came from state 1, item 1. This is marked as a REDUCE state for reasons that we'll explain later.
state 3:
 0* REDUCE 7: F : Identifier

State 4 came from state 1, items 2 and 3


state 4:
 0* SHIFT 2: E : E . + T => 8
 1* SHIFT 1: G : E . EOF => 9

State 5 came from state 1, item 4


state 5:
 0* REDUCE 5: T : F

State 6 came from state 1, items 5 and 6


state 6:
 0* REDUCE 3: E : T
 1* SHIFT 4: T : T . * F => 10

State 7 came from state 2, items 2 and 3


state 7:
 0* SHIFT 6: F : ( E . ) => 11
 1* SHIFT 2: E : E . + T => 8

State 8 came from state 4, item 0


state 8:
 0  SHIFT 6: F : . ( E )       => 2
 1  SHIFT 7: F : . Identifier  => 3
 2  SHIFT 5: T : . F           => 5
 3* SHIFT 2: E : E + . T       => 12
 4  SHIFT 4: T : . T * F       => 12

State 9 came from state 4, item 1. This is essentially the halt state. After it's reduced, there's nothing more to do.
state 9: (halt)
 0* REDUCE 1: G : E EOF

State 10 came from state 6, item 1


state 10:
 0  SHIFT 6: F : . ( E )       => 2
 1  SHIFT 7: F : . Identifier  => 3
 2* SHIFT 4: T : T * . F       => 13

State 11 came from state 7, item 0


state 11:
 0* REDUCE 6: F : ( E )

State 12 came from state 8, items 3 and 4


state 12:
 0* REDUCE 2: E : E + T
 1* SHIFT 4: T : T . * F => 10

State 13 came from state 10, item 2


state 13:
 0* REDUCE 4: T : T * F

You should notice that each of the items in the long list above has a target state. For example, state 10 was constructed from state 6. After completion, it contains two items that map to states that already have been constructed. Thus, in state 10, item 0 "goes to" state 2, and item 1 goes to state 3. We need to


be careful not to construct a new item set if there's one that exactly matches the one we need. That causes the state machine to contain a finite number of states. Notice that Qparser has marked each of the items SHIFT or REDUCE. A completed item (mark at the end of the production rule) is marked REDUCE, while a non-completed item is marked SHIFT. These correspond roughly to the SHIFT and REDUCE states of the finished parser. When a state contains only SHIFT items, it is considered a SHIFT state. Similarly, when a state contains a single REDUCE item, it is considered a REDUCE state. These naturally become SHIFT and REDUCE states in the final LR engine. The remaining states contain some mixture of SHIFT and REDUCE items, e.g. states 6 and 12.

Mixed States
When a state contains more than one REDUCE item, or a SHIFT and a REDUCE item, it is called a mixed state. ("Mixed-up" may be closer to the truth!) A mixed state needs to be further resolved by a process that we won't describe here. The basic problem with a mixed state is that the parsing machine can't decide whether to perform a SHIFT or a REDUCE. That decision is clear if the state contains only SHIFT items, or a single REDUCE item. Otherwise, it isn't. Resolving a mixed state is sometimes possible. When a mixed state is resolved, a new LOOKAHEAD state is created. This kind of state takes another look at the next token in an effort to decide on the next action. The next action will be to one of two or more states, based on the old mixed state. For example, state 6 is a mixed state. There's a REDUCE item (0) and a SHIFT item (1). It turns out that we can resolve this by creating two new states, one containing the REDUCE item and another for the SHIFT item. We also create a new lookahead state, which replaces state 6. The lookahead state will (when parsing) examine the next token, then transfer to one of the two new states. For this grammar, in state 6, it happens that if the next token is "*", then the SHIFT action should be taken. Otherwise the REDUCE action should be taken.

Lookahead Resolution Methods


Certain grammars can be fully resolved into simple SHIFT/REDUCE states, with no mixed states. Most grammars have some mixed states, which require further resolution. There are several different resolution methods possible. They are too complicated to describe in this paper. Qparser uses a method due to DeRemer and Penello [3] called the LALR(1) resolution (LookAhead LR). All the methods depend on looking at the next token in the input sentence, then referring to a special table associated with a LOOKAHEAD state in order to choose an appropriate next state. For example, each of the states 6 and 12 will be split into SHIFT and REDUCE states, preceded (in the state machine) by a resolving LOOKAHEAD state. Some grammars cannot be resolved. The grammar may be ambiguous, which means that there are two or more fundamentally different parses of some sentence. The LR constructor will discover an ambiguity when it finds a mixed item set that can't be resolved, and will print the offending item set. It's up to you as the grammar designer to figure out why the grammar is unresolvable (or ambiguous), and try to correct the problem. This can be very difficult.

State Renumbering
We need to caution the reader that Qparser will renumber the states after all the resolutions are completed. You can see this by comparing the machine listing acquired from option "-i" with the final

one acquired from option "-M". It must renumber the states so that all the SHIFT states are grouped together, also the REDUCE and LOOKAHEAD states. This grouping is how the parsing engine figures out which is which without requiring a table to make that decision. Renumbering clearly makes it hard to track the LR generation operations. However, you can follow them all up to the very last state machine generation.

Optimizing the Parser


You will discover that some of the state transfers are on nonterminals. Yet there are none in the final state machine that Qparser produces. What happened to them? This is yet another story, an optimization, if you like. When we started over with the sentential form
E + F EOF

we have a strong sense of déjà vu. We've scanned the E + symbols previously (when we scanned E+b). We will be starting from the same start state. We'll be parsing the same tokens, E and +. So when we reach the F symbol, we should be in the same state as we were previously when we reached the Identifier (b) symbol. Is it really necessary to rescan the front end of our sentential form each and every time? Clearly not. We can keep track of where we are by maintaining a stack of the symbols in the front end of the current sentential form. This will be the string preceding the handle in the viable prefix. It will be identified upon a REDUCE operation, since that operation identifies the handle, and the stuff preceding the handle will not have changed in this scan. This leads to an optimized parsing strategy, which can be described as follows:
1. Along with each symbol of the viable prefix, also push the next state number.
2. On a REDUCE, consult the state number just under the production rule's right member. This is done by popping the stack by the number of right member tokens (the size of the handle), then looking at the state number on the stack top.
3. Use that state number and consult a goto table associated with the REDUCE state to determine the next state. (This is the tricky step).
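In code, that REDUCE bookkeeping amounts to only a few lines. Here's a sketch with hypothetical names; the real Qparser engine lays out its tables quite differently:

#include <map>
#include <vector>

// One REDUCE step on a stack that holds only state numbers: pop one entry
// per right-member symbol, then use the exposed state to index this REDUCE
// state's goto table.
int reduceStep(std::vector<int>& stateStack, int rhsLength,
               const std::map<int, int>& gotoTable) {
    for (int i = 0; i < rhsLength; ++i)
        stateStack.pop_back();                      // pop the handle
    std::map<int, int>::const_iterator t = gotoTable.find(stateStack.back());
    if (t == gotoTable.end())
        return -1;                                  // can't happen in a correct machine
    stateStack.push_back(t->second);                // "if exposed goto next"
    return t->second;
}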

Constructing the Goto Table


The third operation above implies that we need a goto table associated with each REDUCE operation. To discover the goto table, we need to look back through the state machine and find all the states that will spell out the right member of each production involved in a REDUCE state. These are the states that will appear on the stack after popping the right member. For example, consider the REDUCE state 3. This is associated with the rule F -> Identifier, so we need to find all the states that reach state 3 by spelling out the single token Identifier. These are states 1, 2, 8 and 10 (see figure 2). One of these has to be on the stack under Identifier when REDUCE state 3 comes up. Regarding state 1, suppose we reached state 3 from state 1. After the REDUCE, an F would reach state 5. So we add this transition to the goto table for REDUCE state 3:
1 => 5

Next suppose we got to 3 through state 2. After the REDUCE, an F would be in the sentential form


to be scanned in state 2. An F would then transfer to state 5. So our goto table will have to contain the entry
2 => 5

Next suppose we got to 3 from state 8 by spelling Identifier. After state 8, we'd have F on the stack top, and that goes to state 5. So we need to add this to the goto table:
8 => 5

Finally, if we got to state 3 through state 10, we'd next have a transfer on F to 13:
10 => 13

This completes the goto table for REDUCE state 3. A more interesting goto table is for state 11, which involves the rule F -> ( E ). Here, we need to identify all the states that spell out tokens (, E, ) then reach state 11. An inspection of figure 2 shows that these are states 1 and 2. After the REDUCE action, F is pushed, then scanned from one of these states. From state 1, we go to state 5 on F, and from state 2, we go to state 5 on F. Thus our goto table associated with state 11 is
1 => 5
2 => 5

By doing this everywhere, we eliminate all transfers on nonterminal tokens. These transitions can be dropped from the internal representation of the state machine. However, we can't eliminate the states that now seem to be dangling. For example, by removing the transition on E from 1 to 4, state 4 seems to be dangling. It isn't, really, since it will show up in some goto table. We also don't need to keep track of the viable prefix in the stack. Instead, we only need to carry states in the pushdown stack. This in fact is how Qparser is organized. Its pushdown stack holds only a state and a semantics pointer. By the way, we've fooled you into thinking that a token had to be pushed in the stack on a SHIFT or REDUCE action. We did that to make the LR operations a little easier to follow. But that really isn't necessary, is it? If you look at the SHIFT, REDUCE and LOOKAHEAD operations, you'll see that they make no use of the token in the stack, only the state. So the parser doesn't need tokens in the stack. Also, the parsing engine can decide which token is needed at any position in the parsing stack from the state that's there.
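Transcribed as data for the sketch above, the two goto tables just derived would look like this (an illustrative representation, not Qparser's table format):

#include <map>

// Goto table for REDUCE state 3 (F : Identifier): the key is the state
// exposed after popping the handle, the value is the next state.
std::map<int, int> makeGoto3() {
    std::map<int, int> g;
    g[1] = 5;   g[2] = 5;   g[8] = 5;   g[10] = 13;
    return g;
}

// Goto table for REDUCE state 11 (F : ( E )).
std::map<int, int> makeGoto11() {
    std::map<int, int> g;
    g[1] = 5;   g[2] = 5;
    return g;
}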

A Bottom-up LR Parsing Machine


Let's summarize our LR parser, with stack optimization, etc. A bottom-up parsing machine for a context-free grammar contains a finite-state control, a stack, and an input sentence with a read head. A diagram of the parsing machine is in figure 3. The LR finite-state control is a state machine. At any one time, it is in some state N. There are four different kinds of state, each kind with its own operations.


[Figure 3. The LR parsing engine: a finite-state control with shift, reduce and lookahead actions, a push-down stack of {state, token} pairs (s0 t0, s1 t1, ...), and an input sentence a0 a1 a2 ... under a read head.]

The pushdown stack contains pairs: { state, token }. These are always pushed as a pair and popped as a pair. You can think of these as two separate stacks, such that a push or pop is always done on each one, so that they always track each other. The token (tm, tm-1, ...) may be a terminal or a nonterminal. The state (sm, sm-1, ...) is always a number, and refers to a particular state in the state machine. The state machine has four kinds of action: halt, shift, reduce and lookahead. There's only one halt state; it happens to always be state 0. The other states are distinguished by their state number: there's a group starting with state 1 that are reduce states, then a group that are shift states, and finally (usually) a group that are lookahead states.

LR Machine State Actions


Here's a formal definition of the actions taken by the finite-state control for each of these kinds of state:

HALT: the sentence is fully parsed. The read head should be just past the end of the sentence. Stop.

SHIFT: Read the next token under the read head, then match it against a list of tokens given by the state. The match also determines the next state. Push {next state, next token} in the stack. Go to the next state. Here's a typical shift state description:
11 SHIFT
   (          ==> 12
   Identifier ==> 1
   Integer    ==> 2
   Real       ==> 3
   String     ==> 4

This is state 11. If the next token is (, push { 12, ( } in the stack, then go to state 12. If the next token is an identifier, push { 1, Identifier } and go to state 1. If an integer, push { 2, Integer } and go to state 2, etc.

Syntax Error: if the next token isn't in the state's token list, there's a syntax error. For example, if the next token is +, there must be a syntax error, since + isn't listed in this state.

LOOKAHEAD: Look at the next token under the read head, but don't advance the head. Consult the transfer table and go to the indicated state. Change the state on the stack top to the next state. Here's a typical lookahead state description:
19 LOOK
   )   ==> 6
   +   ==> 6
   EOF ==> 6
   *   ==> 14

If the next token is *, the next state is 14. Otherwise the next state is 6. Change the stack top state to the next state: 14 or 6.

REDUCE: A production rule is associated with the state. Its right member will be found in the stack top. For example, if the production rule is E -> E + T, then the top-of-stack token will be T, the one under that will be +, and the one under that will be E. (There may be more states and tokens beneath the E). The REDUCE action is the most complicated of them all:
1. Perform any semantic operations required with this stack condition and production rule,
2. pop the right members (there may not be any; the production could be an empty one, P -> ε),
3. look at the stack top state TS,
4. use the goto rules to find the next state,
5. push the pair {next state, left member token},
6. transfer to the next state.

Here's a typical reduce state description:
8 REDUCE: F : ( E )
   if 17 goto 10
   if 16 goto 5
   if 12 goto 5
   if 11 goto 5

The three stack top elements will be (, E, and ), with the ) in the top-of-stack position. The action is therefore to pop the top three stack elements. The exposed stack top state must be one of {17, 16, 12, 11}, according to the table in the state description. If it's 17, then the next state is 10. If it's 16, 12 or 11, the next state is 5. Before we transfer to the next state (assume it's 5), we push the pair { 5, F }. Notice that this is how nonterminals find their way into the stack.
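Putting the four state kinds together, the whole parsing engine is a small loop. Here's a self-contained sketch; the table layout, type names and error handling are invented for illustration, and Qparser's generated tables are organized differently, as described below.

#include <map>
#include <vector>

enum Kind { HALT, SHIFT, REDUCE, LOOK };

struct LrState {
    Kind kind;
    std::map<int, int> next;       // SHIFT/LOOK: token code -> next state
    int popCount;                  // REDUCE: number of right-member symbols
    std::map<int, int> gotoTable;  // REDUCE: exposed state -> next state
};

// 'input' is the token stream, ending with the EOF code.
bool parse(const std::vector<LrState>& states, const std::vector<int>& input,
           int startState) {
    std::vector<int> stack(1, startState);  // states only, per the optimization
    std::size_t pos = 0;
    int s = startState;
    for (;;) {
        const LrState& st = states[s];
        if (st.kind == HALT) {
            return true;                    // sentence fully parsed
        } else if (st.kind == SHIFT) {      // consume a token, push next state
            std::map<int, int>::const_iterator t = st.next.find(input[pos]);
            if (t == st.next.end()) return false;        // syntax error
            ++pos;
            s = t->second;
            stack.push_back(s);
        } else if (st.kind == LOOK) {       // peek at the next token only
            std::map<int, int>::const_iterator t = st.next.find(input[pos]);
            if (t == st.next.end()) return false;        // syntax error
            s = t->second;
            stack.back() = s;               // change the stack-top state
        } else {                            // REDUCE
            for (int i = 0; i < st.popCount; ++i) stack.pop_back();
            std::map<int, int>::const_iterator t = st.gotoTable.find(stack.back());
            if (t == st.gotoTable.end()) return false;   // internal table error
            s = t->second;
            stack.push_back(s);
        }
    }
}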


Writing a Qparser Grammar File


We've described a Qparser grammar file and parser generation previously (see chapter 4). Here's a listing of g0.grm as a text file.
// g0.grm
lexfile=../lib/c.lex
classdefs= {
  Ctoken: EOF, Identifier, Integer, Real, String;
}
globals= {
  #include "eval.h"
}

G : E EOF       // grammar g0
E : E + T
E : T
T : T * F
T : F
F : ( E )
F : Identifier
F : Integer
F : Real
F : String

The LR State Machine


Here's a complete LR state machine for grammar g0.grm, as generated by the Qparser tool nlr1. To get this listing, you need to execute
nlr1 -M g0

This line is in fact included in the makefile you've been using. You will find this state machine report in file g0.rpt.
LR State Machine
Initial state: 11, initial stack state: 11
0 HALT
1 REDUCE: F : Identifier
    if 17 goto 10
    if 16 goto 5
    if 12 goto 5
    if 11 goto 5
2 REDUCE: F : Integer
    if 17 goto 10
    if 16 goto 5
    if 12 goto 5
    if 11 goto 5
3 REDUCE: F : Real
    if 17 goto 10
    if 16 goto 5
    if 12 goto 5
    if 11 goto 5
4 REDUCE: F : String
    if 17 goto 10
    if 16 goto 5
    if 12 goto 5
    if 11 goto 5
5 REDUCE: T : F
    if 16 goto 20
    if 12 goto 19
    if 11 goto 19
6 REDUCE: E : T
    if 12 goto 15
    if 11 goto 13
7 REDUCE: G : E EOF
    if 11 goto 0
8 REDUCE: F : ( E )
    if 17 goto 10
    if 16 goto 5
    if 12 goto 5
    if 11 goto 5
9 REDUCE: E : E + T
    if 12 goto 15
    if 11 goto 13
10 REDUCE: T : T * F
    if 16 goto 20
    if 12 goto 19
    if 11 goto 19
11 SHIFT
    (          ==> 12
    Identifier ==> 1
    Integer    ==> 2
    Real       ==> 3
    String     ==> 4
12 SHIFT
    (          ==> 12
    Identifier ==> 1
    Integer    ==> 2
    Real       ==> 3
    String     ==> 4
13 SHIFT
    +   ==> 16
    EOF ==> 7
14 SHIFT
    *   ==> 17
15 SHIFT
    )   ==> 8
    +   ==> 16
16 SHIFT
    (          ==> 12
    Identifier ==> 1
    Integer    ==> 2
    Real       ==> 3
    String     ==> 4
17 SHIFT
    (          ==> 12
    Identifier ==> 1
    Integer    ==> 2
    Real       ==> 3
    String     ==> 4
18 SHIFT
    *   ==> 17
19 LOOK
    )   ==> 6
    +   ==> 6
    EOF ==> 6
    *   ==> 14
20 LOOK
    )   ==> 9
    +   ==> 9
    EOF ==> 9
    *   ==> 18

Notice that each of the four state types (HALT, REDUCE, SHIFT and LOOKAHEAD) appears in the state machine. The number in the left-hand column is the state number of the state. For example, the HALT state (there's only one) has state number 0. State number 1 is a REDUCE state. The last REDUCE state is 10, and the first SHIFT state is 11. The last SHIFT state is 18, the first LOOK state is 19, and the last LOOK state is 20. (The numbers are really arbitrary, but the states must of course be uniquely numbered). The initial state is given in the top line (11). The stack will initially contain the state 11. The token associated with this state on the stack can be anything. Read on, and we'll work through a parsing example in detail, using this LR machine.

An Example LR Parse
Let's work through a simple parse by hand, using the LR state machine above, and following each of the steps carefully. On each move, we need to keep track of (1) the current state, (2) the current stack contents, and (3) the current token position in the input sentence. From these, we can use the parser tables to deduce the next state, the (possibly changed) stack contents, and the (possibly changed) token position. All these operations can be viewed through the Qparser software, but it's good to go through them at least once by hand, so that the mechanism can be thoroughly understood. We're going to build on this system, so please learn how it works!

The Initial State


The machine starts with the initial state 11, with state 11 on the stack. The read head is positioned on the leftmost token of the sentence. Let's choose a simple sentence, otherwise the state actions will run on for many pages: a + b $. (The $ symbol stands for the end-of-file EOF). You can easily confirm that this can be derived in the grammar G0, and you should work out the derivation to confirm that our machine in fact discovers it in reverse. So here's the initial configuration, expressed as a one-line table:

State        Stack (state, token)     Remaining input sentence
11 shift     11 -                     a + b $

The stack always contains a token-state pair, but the first token in the stack is never referenced, so we'll just use - for it.

The Next State


What's the next configuration? In the state machine description, we find this entry for state 11:

11 SHIFT
   (          ==> 12
   Identifier ==> 1
   Integer    ==> 2
   Real       ==> 3
   String     ==> 4

which tells us that we must perform a SHIFT action. The current token is a, an Identifier, and that's in this token list. The next state (given the Identifier) is 1. So we push the pair {1, a} in the stack (actually the token corresponding to an identifier), and move the read head past it. The state machine configuration is now this:

State                 Stack        Remaining input sentence
1 reduce (F -> a)     11 -         + b $
                      1  a

Notice how the stack now contains two token-state pairs, with {1, a} on top, and the a has been scanned.

The Next State


State 1 is a REDUCE state. The production rule is F -> Identifier, and we are of course gratified to find that an identifier (a) is indeed on the stack top, as it should be. The REDUCE action requires popping one element from the stack, since there's just one element in the production rule's right member. Let's start by popping the stack:

State                 Stack        Remaining input sentence
1 reduce (F -> a)     11 -         + b $
1 REDUCE: F : Identifier if 17 goto 10 if 16 goto 5 if 12 goto 5 if 11 goto 5

and we discover that state 11 is in the list (it had better be!). The goto state is 5. We push the F token, along with the new state (5), and our parse now looks like this:

State                 Stack        Remaining input sentence
5 reduce (T -> F)     11 -         + b $
                      5  F

Another REDUCE State


State 5 is another REDUCE state:
5 REDUCE: T : F
   if 16 goto 20
   if 12 goto 19
   if 11 goto 19

The action here is to pop one element from the stack, look at the state underneath (11), look up the 11

in the state list to get the next state (19), then push the pair {19, T}:

State        Stack        Remaining input sentence
19 look      11 -         + b $
             19 T

A LOOKAHEAD State
State 19 is a LOOK state:
19 LOOK
   )   ==> 6
   +   ==> 6
   EOF ==> 6
   *   ==> 14

Here, we compare the next token (+) in the input sentence with the tokens listed in the state, and find a match. (It's a syntax error if a matching token isn't in the LOOK list). Here we find the + token, and the next state should be 6. All that happens is state 19 changes to a 6:

State                 Stack        Remaining input sentence
6 reduce (E -> T)     11 -         + b $
                      6  T

Another REDUCE State


State 6 is another REDUCE state:
6 REDUCE: E : T
   if 12 goto 15
   if 11 goto 13

Following the same procedure as with other REDUCE state actions, we obtain the following configuration. Notice that the input sentence is not affected in a REDUCE state, nor do we care what the current token is.

State        Stack        Remaining input sentence
13 shift     11 -         + b $
             13 E

Another SHIFT State


State 13 is another SHIFT state:
13 SHIFT
   +   ==> 16
   EOF ==> 7

This finally consumes the input token +. We also get another pair {16, +} pushed on the stack:

State        Stack        Remaining input sentence
16 shift     11 -         b $
             13 E
             16 +


The Remaining Steps


The remaining steps are given below. The student should work through each step carefully, following the operations described above, and the LR state machine.

State                    Stack                      Remaining input sentence
1 reduce (F -> b)        11 -  13 E  16 +  1 b      $
5 reduce (T -> F)        11 -  13 E  16 +  5 F      $
20 look                  11 -  13 E  16 +  20 T     $
9 reduce (E -> E + T)    11 -  13 E  16 +  9 T      $
13 shift                 11 -  13 E                 $
7 reduce (G -> E $)      11 -  13 E  7 $            empty
0 halt                   11 -                       empty

When state 0 is reached, the input list should be empty. This machine will look for an end-of-file token to reach its halt state.

Grammar Conflicts
As we've discussed above, a state that contains more than one completed item, or a mixture of completed and non-completed items, results in a grammar conflict. We say that such a state is mixed. Without some additional work, the parser cannot determine whether to perform a shift or a reduce operation on a mixed state. Let's look at a grammar with an ambiguity to see how Qparser detects and reports a conflict. Here's the grammar (examples\g6.grm):
// g6.grm, an ambiguous grammar
lexfile=../lib/c.lex
classdefs= {
  Ctoken: EOF, Identifier;
}
globals= {
  #include "eval.h"
}


G : E EOF       // grammar g0
E : E + T
E : T
T : T * F
T : T + F
T : F
F : ( E )
F : Identifier

The ambiguity will occur because we've added this production rule
T : T + F
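To see the ambiguity concretely, here is a worked pair of derivations of id + id (EOF omitted for brevity): one goes through E : E + T, the other through the new rule T : T + F.

G => E => E + T => T + T => F + T => id + T => id + F => id + id    (via E : E + T)
G => E => T => T + F => F + F => id + F => id + id                  (via T : T + F)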

This provides more than one way of getting a sequence of F objects separated by a "+" token. You can study the ambiguity by having Qparser report the item sets, like this:
nlr1 -i g6 > g6.rpt

This should generate the following report file, g6.rpt. As usual, we'll add comments to explain what it all means. These are the item sets generated as explained above. The item sets are similar to those for g0.grm, but of course, not exactly the same. Most of the states are not mixed.
state 1:
 0  SHIFT 7: F : . ( E )       => 2
 1  SHIFT 8: F : . Identifier  => 3
 2* SHIFT 1: G : . E EOF       => 4
 3  SHIFT 2: E : . E + T       => 4
 4  SHIFT 6: T : . F           => 5
 5  SHIFT 3: E : . T           => 6
 6  SHIFT 4: T : . T * F       => 6
 7  SHIFT 5: T : . T + F       => 6
state 2:
 0  SHIFT 7: F : . ( E )       => 2
 1  SHIFT 8: F : . Identifier  => 3
 2  SHIFT 2: E : . E + T       => 7
 3* SHIFT 7: F : ( . E )       => 7
 4  SHIFT 6: T : . F           => 5
 5  SHIFT 3: E : . T           => 6
 6  SHIFT 4: T : . T * F       => 6
 7  SHIFT 5: T : . T + F       => 6
state 3:
 0* REDUCE 8: F : Identifier
state 4:
 0* SHIFT 2: E : E . + T => 8
 1* SHIFT 1: G : E . EOF => 9
state 5:
 0* REDUCE 6: T : F

Here's a mixed state.
state 6:
 0* REDUCE 3: E : T
 1* SHIFT 4: T : T . * F => 10
 2* SHIFT 5: T : T . + F => 11
state 7:
 0* SHIFT 7: F : ( E . ) => 12
 1* SHIFT 2: E : E . + T => 8
state 8:
 0  SHIFT 7: F : . ( E )       => 2
 1  SHIFT 8: F : . Identifier  => 3


 2  SHIFT 6: T : . F           => 5
 3* SHIFT 2: E : E + . T       => 13
 4  SHIFT 4: T : . T * F       => 13
 5  SHIFT 5: T : . T + F       => 13
state 9: (halt)
 0* REDUCE 1: G : E EOF
state 10:
 0  SHIFT 7: F : . ( E )       => 2
 1  SHIFT 8: F : . Identifier  => 3
 2* SHIFT 4: T : T * . F       => 14
state 11:
 0  SHIFT 7: F : . ( E )       => 2
 1  SHIFT 8: F : . Identifier  => 3
 2* SHIFT 5: T : T + . F       => 15
state 12:
 0* REDUCE 7: F : ( E )

Here's another mixed state.
state 13:
 0* REDUCE 2: E : E + T
 1* SHIFT 4: T : T . * F => 10


 2* SHIFT 5: T : T . + F => 11
state 14:
 0* REDUCE 4: T : T * F
state 15:
 0* REDUCE 5: T : T + F

A LALR conflict is reported like this.


========LALR CONFLICT========

The token "+" (the only one in the set) is causing a lookahead conflict in resolving state 6 ...state 6 conflict is on the tokens {+ } The following are the items found in state 6. The set following each rule (in braces, {...} ) is a "lookahead" set, for example {) + EOF }. Essentially, this is a set of "next" tokens for which this operation can be taken deterministically. For a REDUCE action on the E: T rule, we should see one of the tokens ")", "+" or "EOF" next. For a SHIFT action on T: T*F, we should see token "*" next. For a SHIFT action on T: T+F, we should see token "+" next. If these sets are pairwise disjoint (like the first sets for the recursive descent parser, chapter 8), then there's no conflict, and this state can be resolved. However, token "+" appears in both the REDUCE action on E: T and in the SHIFT action on T: T+F. That's the conflict, and it will prevent making a deterministic parser from this grammar.
 E : T .          {) + EOF }
 T : T . * F      { *}   ==> 10
 T : T . + F      { +}   ==> 11
.....................

Qparser complains about being unable to resolve this state properly, printing it again.

AFTER precResolve - (not resolved)...state 6
 E : T .          {) + EOF }
 T : T . * F      { *}   ==> 10
 T : T . + F      { +}   ==> 11
.....................

Qparser takes an arbitrary action to resolve the state. Most of the time, we prefer a SHIFT (READ) action to a REDUCE action. By doing that here, we can resolve this conflict. That's done by removing token "+" from the lookahead list associated with E: T.
AFTER resolution- resolved by READ preferred to REDUCE (may not be good) ...state 6
 E : T .          {) EOF }
 T : T . * F      { *}   ==> 10
 T : T . + F      { +}   ==> 11
========LALR CONFLICT========

State 13, being mixed, also requires resolution. As with state 6, it can't be resolved except in a rather arbitrary way.
...state 13 conflict is on the tokens {+ }
 E : E + T .      {) + EOF }
 T : T . * F      { *}   ==> 10
 T : T . + F      { +}   ==> 11
.....................
AFTER precResolve - (not resolved)...state 13
 E : E + T .      {) + EOF }
 T : T . * F      { *}   ==> 10
 T : T . + F      { +}   ==> 11
.....................
AFTER resolution- resolved by READ preferred to REDUCE (may not be good) ...state 13
 E : E + T .      {) EOF }
 T : T . * F      { *}   ==> 10
 T : T . + F      { +}   ==> 11

This message warns you that although the grammar was resolved (it always is by Qparser), it may behave in an unexpected fashion.
=== 2 LALR conflicts resolved ===
PLEASE study the conflict report -
grammar may not behave as you expect

Repairing a Conflict
Repairing a conflict requires that you first understand what is causing it. Is there a fundamental ambiguity in the grammar, or is the conflict due to a shortcoming of the one-token lookahead used by the resolver? The LALR system can't tell the difference, and you may have a hard time figuring it out.


By studying the production rules and the tokens causing the conflict, you should be able to work out the problem. You might also want to study the complete state machine, obtained through option "-i". Notice that it's only necessary to run nlr1. You don't need to run a complete make of the entire parser, since only this utility finds and resolves such ambiguities. Here are some other hints:
- Examine the multiple appearances of the conflicting token in your grammar. For example, if you get a conflict on semicolon (;), which is very common, see if there's an underlying ambiguity surrounding that token.
- Try stripping down the grammar a few rules at a time in an effort to preserve the conflict, but reduce the number of rules. nlr1 is very fast, so you can try it out repeatedly as you reduce the grammar. Eventually, you should get at the core of the problem.
- Alternatively, start with a very simple grammar that you know is free of conflicts, then add rules a few at a time, until you start seeing some conflicts.
- Look for nonterminals that are involved in both a left and right recursion. This is guaranteed to cause an ambiguity.
- Study grammars known to be free of conflicts, for example, the Pascal grammar given in directory pascal0. (Actually, it has one conflict, resolved through a special precedence resolver left=, as explained later). By starting with a conflict-free grammar, you can try removing and adding features to shape it into the language you want.
- Do something! Just staring at a conflict-ridden grammar won't solve the problem. You need to try different ways of expressing what you want in terms of production rules. Getting practice in rewriting rules is vital.

Disambiguating Rules
Another way to eliminate conflicts in an LR parser is to introduce disambiguating rules. Qparser provides this mechanism in order to provide some compatibility with yacc [4], which provides such rules. By using such rules, we can make nlr1 accept an ambiguous grammar, and have the resulting conflicts resolved cleanly. For example, grammar comp\comp.grm has the following production rules:
Expr : Expr + Expr    #ADD
     | Expr - Expr    #SUB
     | Expr * Expr    #MPY
     | Expr / Expr    #DIVIDE

Each of these four rules is both left-recursive and right-recursive, which makes the grammar ambiguous. We've also lumped all four operators into the same Expr rule base, so that there's no longer any precedence of * over +. Nevertheless, it's possible to warp the LR parsing engine into both resolving the ambiguity and providing the desired precedence. We do this by selectively changing certain of the SHIFT-REDUCE mixed-state transition tables. A set of disambiguating rules specifies a kind of precedence hierarchy for tokens that appear in a conflict. They are written into the grammar file, and must appear prior to any production rules. Here's a set that is used in the grammar comp.grm:
// Precedence relations used to disambiguate production rules


left= + -
left= * /

These say the following:
- The operators + and - have equal precedence (listed on the same line), and are evaluated in left-to-right order (the left keyword, which effectively makes any rule containing "+" or "-" left-recursive). If you use right instead, they will be evaluated in right-to-left order (the rules containing "+" or "-" are effectively made right-recursive).
- Operators * and / have higher precedence than + and - (because the */ line follows the +- line). However, * and / have equal precedence and are evaluated in left-to-right order.

The base grammar is ambiguous, and that yields lots of LALR conflicts. However, by adding the disambiguating rules given above to the grammar, all the conflicts are resolved silently. But please note: these disambiguating rules only become effective on a conflict in the underlying grammar. If there's no conflict, they are never used. They form a kind of background repair shop intended to fix things if they are broken (ambiguous). Unbroken stuff isn't sent to the repair shop. There are two advantages to using ambiguous production rules with disambiguating precedence rules:
- The number of rules in the grammar is smaller. There's less need for single-production rules.
- The number of states in the parser is smaller. That yields a slightly more efficient parser in space and time.

Another example of disambiguation can be found in file pascal.grm, in directory pascal5. Here, the if-then-else statement causes a grammar ambiguity, which can be resolved with these disambiguating rules:
left= IF
left= ELSE

These rules cause a quiet resolution of the LR conflict that would otherwise result. They also force an ELSE to be associated with the nearest IF, so that
IF b1 THEN IF b2 THEN s1 ELSE s2;

will be interpreted as
IF b1 THEN BEGIN IF b2 THEN s1 ELSE s2 END;

rather than
IF b1 THEN BEGIN IF b2 THEN s1 END ELSE s2;

Notice that these rules apply to any production rule that happens to contain one of the mentioned tokens, i.e. IF or ELSE. However, the disambiguating only affects those mixed states that can't be resolved in any other way. They provide a fallback strategy to invoke if "all else fails". So, another production rule using one of these tokens won't be affected by any disambiguating rule, unless the rule appears in an unresolvable conflict. Disambiguation also has no effect on any semantics operations. For example, semantics code for
Expr : Expr + Expr #ADD

can still be written using Expr.1, Expr.2, and Expr.3 to refer to these three nonterminals.

More about Parser Generation


Figure 4 illustrates the Qparser build process. A complete translator can be constructed from a suitably designed grammar file. We've illustrated the process for file calc.grm, but the figure applies to any grammar file.


[Figure 4. Qparser build process for a typical grammar, calc.grm: the parser generator nlr1 and the optimizer opt turn the grammar file calc.grm into the table file calc.tbl. lr1p merges the tables with skeleton files such as pars.skc to produce source files such as pars.cpp, while the lexical generator lextbl produces the scanner source calclex.cpp. These and the other source files are compiled and linked by the C++ compiler into the executable calc.exe, which translates an input source such as 5 + 3*(8-2) into the translation 23.]

The process begins with a grammar file, calc.grm. It may contain rules with semantics, i.e. decorated with a tag and C++ code enclosed in braces { . . . }. All these actions are carried out through a makefile and/or a project script. You'll find these in each translator directory: calc, comp, pascal1, pascal2, ..., pascal5. Here's a detailed discussion of the tools mentioned in figure 4, and their operations. Source code for all these tools can also be found in the complete version of Qparser, i.e. the Unix version. Some of these are written in C and draw on the special library directory libc, rather than lib.


nlr1
Utility nlr1 accepts the grammar file and does all of the following:
- Looks for syntax and semantic errors in the grammar file. If any are found, a report will be sent to stdout. Usually this is redirected to a report file, e.g. calc.rpt, so you need to look at that for problem reports.
- Picks up the lexfile, classdefs and globals statements. These are saved for later inclusion in the generated source files.
- Looks for LR conflicts. A new grammar may contain numerous conflicts that should be resolved before proceeding with semantics additions. Conflict reports are more fully described above. They require a deeper understanding of grammars and the LR machine to interpret. Conflict reports are sent to stdout.
- Looks for incomplete rules and dangling nonterminals. A dangling nonterminal is a nonterminal that isn't connected to the goal nonterminal through some grammar rule path. Problem reports are sent to stdout.
- Identifies and collects the tokens and nonterminals of the grammar. nlr1 also distinguishes lexical terminals from ordinary ones.
- Performs a LALR(1) reduction of the grammar to a finite-state machine. This abstract machine and its generation is described above.
- Works through any semantics code, checking for matched quotes, parentheses and braces. Everything, including comments, is transferred to a target source code string, and connected to the source production rule. It looks for instances of nonterminals found both in the left and right members of the production rule and in the semantics string. These are translated into appropriate stack references in the target file. You can discover this translation by comparing a typical semantics sequence in the grammar with its transformation as found in file apply.cpp.
- Prepares a binary file calc.tbl that carries most of the LR machine information, along with other tables that describe the tokens, tags and the semantic actions.
- Returns a zero condition code if no serious problems were found, non-zero otherwise. This is tested in some of the makefiles, and, if so, will abort the build process.

The optimizer, opt.exe (opt in Unix), works over certain tables generated by nlr1, looking for space compression optimizations. Several are possible, and it wasn't convenient to fold these optimizations into nlr1. This step could be skipped, but the resulting parser tables may be considerably larger without it. The table file, calc.tbl, is used by two different utilities, lr1p.exe and lextbl.exe (lr1p, lextbl in Unix).

lr1p
lr1p reads both the table file and a skeleton file, such as pars.skc. These skeleton files can be found in directory lib, and have a suffix skc when their target is a .cpp file, and suffix skh when their target is a .h file. Skeleton files are source files, and they mostly resemble their target files. lr1p mostly copies a skeleton file to its target, but during the copy phase, it looks for special operations enclosed in braces, like this:
{## . . . ##}

Within these braces, a Pascal-like language will be interpreted which has the effect of generating segments of source code, and drawing upon tables found in calc.tbl. lr1p is responsible for each of

these actions:
- It generates the switch statement found in apply.cpp, function Cevpars::apply. This is how the semantics associated with the REDUCE action of a particular production rule are executed. Compare a typical apply.cpp with its skeleton file lib\apply.skc, and you'll see this service.
- It generates various array initializations, for the parser tables and the semType table. Most of the parsing tables appear in langtab.cpp. A smaller one is in apply.h.

The skeleton files can be tailored to virtually any free-form host language. For example, we don't have to rely on C++ as the host language, but could port a compiler to Java, Ada, Perl or some other modern language. Of course, all the C++ supporting library functions (in directory lib) must also be rewritten in the alternative language. Also, the lexical generator lextbl must be adapted to suit the new host language. File lib\lexgen.cpp must be adapted to generate host source language, in particular. So the Qparser tools could remain in C/C++, but they need to be adapted to generate the new host source code. A more complete discussion of lr1p is given later in this chapter.

lextbl
lextbl generates a lexical analyzer C++ source file. It's similar to lexgen, described in chapter 3, in that it accepts a description of the lexical tokens of the language from the specified lexfile. It picks up the literal tokens from the tables produced by nlr1, ignoring any that appear in the lexfile. It then generates a reduced DFSM in the form of a C++ program. The name of the generated scanner source file is obtained from the grammar file's name. Since we are using grammar file calc.grm, lextbl will generate the file calclex.cpp, i.e. the grammar file name with "lex.cpp" appended to it. calclex.cpp will be just one component of your parser, of course.

Compiling and Linking the Translator


One source file is prepared by lextbl, e.g. calclex.cpp; this is the heart of the lexical analyzer. Several others are prepared by lr1p, e.g. pars.cpp and apply.cpp. Yet other source files are grammar-independent. Some of these are independent of any of your semantic operations, providing generic tools for your use in the final translator. The grammar-independent files are in directory lib. Grammar-dependent source files may also be incorporated through your translator work. In general, these will be functions that you've designed to interface with the semantics code. In the comp subdirectory, for example, you'll find that files eval.h, eval.cpp, and main.cpp have been extended to suit that particular translator. Any such new files should be built into your project and/or make files, of course, to facilitate development of your translator.

The Executable Translator


If these source files are complete and free of syntax errors, then your C++ compiler and linker should be capable of generating an executable file, which will carry the same name as your grammar file, e.g. calc.exe. This will be a parser, interpreter or translator, depending on how you've configured the grammar file. In figure 4, we've shown this executable operating on the sentence
5+3*(8-2)

to yield the 'translation' 23. Actually, calc.grm is designed to operate on a sequence of assignment statements, separated by line returns, so it can deal with identifiers as well. Your translator has some debugging tools built in. By running it with option "-d0", it will report each of the REDUCE action production rules involved in the parsing. Option "-d1" will show the

parser stack for each REDUCE action. Option "-d2" will show the parser stack on each LR operation, SHIFT, REDUCE and LOOKAHEAD. Finally, option "-d3" will also show all details of an error recovery action. These obviously increase in verbosity, but will help you debug problems with a translator. If you use a capital D instead, the debugging information will spool out to stdout without any prompt requests. You can then redirect the report to a file for additional study.

Source Code Generation with lr1p


The program lr1p is a translator in its own right. It was written with an early version of Qparser with a special grammar and a special lexical analyzer. As we've explained above, lr1p essentially copies a source skeleton file to a source target file, but it looks for the special character combination {## in the source file during the copy process. When this is discovered, lr1p switches to a special mode in which all the subsequent source material is interpreted as a program. This interpretation phase ends with a subsequent sequence ##}. If the source file doesn't contain these break characters, then it's simply copied verbatim to the target file. Skeleton files are usually written as source code in the host language of the parser (i.e. C++), with certain breaks in which host source code will be generated according to the needs of the source language's grammar and parsing system. Here's a typical program sequence, as found in the skeleton file lib\langtab.skc:
{##
var K, S, COUNT, UD, LEN, GENL_KIND: integer; {parser tables}
begin
  genl_kind:= 21;
  indent:= 0;
  len:= 0;
  for k:=1 to term_toks do
    if (len < strlen(tokens[k])) then len := strlen(tokens[k]);
  writeln('short Ctable::Maxtoklen= ', len, ';');
  writeln('short Ctable::Terminals= ', term_toks, ';');
  writeln('short Ctable::EOF_CODE= ', eof_code, ';');
  writeln('short Ctable::StartState= ', start_state, ';');
  writeln('short Ctable::ReadState= ', readstate, ';');
  writeln('short Ctable::LookState= ', lookstate, ';');
  writeln('short Ctable::flagsDim= ', udim(flags), ';');
  writeln('const short Ctable::statex[]= {');
  indent:=2;
  write('/* 0*/ 0');
  for k:=ldim(statex) to udim(statex) do begin
    if k mod 10 = 0 then begin
      writeln;
      write('/* ', k, '*/ ');
    end;
    write(',', statex[k])
  end;
  indent:=0;
  writeln('}');
  writeln;
  ... etc. ...
end
##}


These segments are always in the form of a (simplified) Pascal program. A segment opens with optional var declarations, which can only declare integer types. These are used as counters and array indices. Following the var section is a begin ... end segment that encloses a program to execute. The Pascal-like language includes assignment statements, and while-do, for-do, begin-end, and if-then-else statements, which are sufficient to write little programs that generate source text. Function declarations and calls are not provided. String variables are not provided, although you can use string literals freely within the write/writeln statements.

Source output text is generated by write and writeln statements, which work much as in Pascal, except that they will generate reasonably well-formatted source code. In particular, target lines are broken on token boundaries when more than 72 characters are written to a line, and each new line is prefixed with a series of blanks, forming a left margin. The specific margin is controlled by the statement
indent:=2

which causes a two-space margin to be applied to each new line.

Most of the programming work has already been done for you. The parser generator nlr1 produces a table file g0.tbl from a grammar g0.grm. This file is also required by lr1p, which reads it and sets up the various predefined variables and arrays that describe the language features found in the grammar, as needed in its translation process.

The segment above first generates a sequence of variable declarations. These variables are bound as static variables in class Ctable, and must be initialized outside the class definition. For example, the line
writeln('short Ctable::Maxtoklen= ', len, ';');

will generate the line


short Ctable::Maxtoklen= 25;

in the target file, assuming that the integer variable len turns out to be 25. The second block of code generates a table of numbers based on the statex array. This is one of several tables used in the LR parser.
writeln('const short Ctable::statex[]= {');
indent:=2;
write('/* 0*/ 0');
for k:=ldim(statex) to udim(statex) do begin
  if k mod 10 = 0 then begin
    writeln;
    write('/* ', k, '*/ ');
  end;
  write(',', statex[k]);
end;
indent:=0;
writeln('}');
writeln;

The for loop runs through the integer range ldim(statex) .. udim(statex), inclusive. Its effect is to write a sequence of numbers drawn from the parser table statex. This table is of course provided in the grammar table file calc.tbl. The upper and lower limits of the various tables are returned through the special functions ldim and udim. To make the generated source more readable, we also generate index markers, and start a new line on multiples of 10. C-style comments are used for the new-line markers. You'll find an opening and closing writeln that sets up the array as a const array. Here's a typical table as generated by this fragment and lr1p (this is generated in qparser\calc):


const short Ctable::statex[]= {
/* 0*/ 0,1,3,4,4,6,9,10,11,4
/* 10*/ ,9,1,20,10,4,21,21,21,21,21
/* 20*/ ,21,11,11,11,11,1,22,6,0,8
/* 30*/ ,15,20,22,28,30,35,44,8,46,48
/* 40*/ ,22,22,22,22,59,22,22,61,61,64
/* 50*/ ,30,22,30,70,72,75,78,81,83}

A few of the tables generated by lr1p must contain quoted strings. For these, the function qstring provides an appropriate quoted string; that is, it supplies an opening and closing quote mark and escapes embedded quote marks correctly. Unfortunately, lr1p is locked into C or Pascal quoted strings and would require some tinkering to adapt to other string forms. It does pay attention to the language flag found in the table file, but only for these two languages. To trace this language dependency in lr1p, we suggest studying the source code in directory lr1p surrounding the identifier source_code, as well as the flags C_MODE and PASCAL_MODE. In any case, the following program fragment generates an array of quoted strings drawn from the table file, which defines the array tokens:
writeln('tokenNames[0]= "WhiteSpace"');
for k:=1 to all_toks do
  writeln('tokenNames[', k, ']= ', qstring(tokens[k]), ';');
writeln;

A more detailed description of the supported tables is given below. You usually won't need to know about them in a parser application, unless you choose to rewrite the Qparser tools for a different host language. The interested reader can glean their definitions by examining the skeleton files and the parser internal code. The advantage of this approach is that by rewriting the skeleton file and other library files, the host language can be something other than C++, for example, Ada, Java, Modula or Pascal.

LR Parser Table Organization


An LR parser is essentially a finite-state machine. The states can be given small numbers in a dense set, e.g. 0, 1, 2, 3, ... These are partitioned into four subsets: state 1 is the HALT state; the next state numbers 2, 3, ... are the REDUCE states; the next set are the SHIFT states; and the last set are the LOOKAHEAD states. Not every LR machine has LOOKAHEAD states, but only an empty machine lacks REDUCE or SHIFT states. By grouping the states this way, we can quickly determine which of the four state types we are dealing with.

Within each state, in general, there's a list of arbitrary length that determines some action. The actions for SHIFT and LOOKAHEAD are very similar: we need to look up a token number in a list associated with the state, then find its associated next state. For the REDUCE action, we need to look up a state number in a list associated with the state, then associate it with the next state. In addition, the REDUCE state must be associated with the number of elements to pop from the parser stack, and a target location for semantics actions. A space-efficient implementation is shown in figure 5. The notation agrees with the array declarations you'll find generated by Qparser in file langtab.cpp.

An operation begins by accessing the statex array with the state number N. This array covers all four kinds of state; however, the parser needs to separate the kinds in order to complete an operation.

For a SHIFT operation, statex[N] carries an index into both the toknum array and the tostate array. Call this index S. Then toknum[S], toknum[S+1], toknum[S+2], etc. carry the token numbers (codes) of the tokens that are expected in this state. A valid token number is always a non-zero positive integer, so this list can be terminated by a 0 token value, as suggested in figure 5. Reaching 0 without matching the next token implies a syntax error. However, if we find a token match in this list at index S, then the next state is given by tostate[S].

The LOOKAHEAD operation is essentially the same. statex[N] will carry an index into the same array, toknum, and it will be searched for a match with the next token as in the SHIFT case. However, we associate the 0 token with "the most popular state", rather than declaring a syntax error. This effectively defers reporting a syntax error until the next SHIFT move, in which it will most certainly be caught.

The REDUCE operation involves first passing control to an appropriate semantics routine, if any. The map[N] array provides a reference to that routine, for state N. It in fact contains an enumerated type derived from the tag on the production rule. For example, the production rule
E : E + T #PLUS

will cause the name PLUS to be declared as an enumerated type. You can find this in file cskels\apply.h after you've built the calc.grm parser there. This symbol should end up as number 36 in the list, but the exact value is unimportant. You'll find map in file langtab.cpp. It carries the symbolic form of these enumerated types, which of course turn into integers when the parser is compiled. map[N] will carry the flag OTHER if there's no production tag defined, and that will cause a default action to occur.

The semantics operations are actually carried out in function apply, found in file cskels\apply.cpp. You'll see that this function is called with the state number N as a parameter. It makes use of map in a switch statement to select the associated semantics action. After the apply operations are completed, REDUCE is expected to pop the parser stack by the number of symbols in the right member of the associated production rule. For state N, this is given in table popno[N]. After these are popped, the state number on the stack top is consulted for the goto action. Here, statex[N] points to a list of states in the stk_state table. Each list is terminated by a 0, which, if reached, does not mean an error, but rather "the most popular state". (State numbers can't be 0, incidentally.) So we will either find a match in stk_state, or reach 0. In either case, the next state will be in stk_tostate.

[Fig. 5. LR machine table organization: statex indexes each state into the 0-terminated toknum/tostate lists (SHIFT, LOOKAHEAD) and the 0-terminated stk_state/stk_tostate lists (REDUCE goto); map and popno supply each REDUCE state's semantics tag and pop count.]

Other Tables
Several other tables are also generated through lr1p. Table prodx maps a state number to a production rule. The production rule is carried in prods[prodx[N]], prods[prodx[N]+1], ... as token codes. This is the entire production rule, including the left member in prods[prodx[N]]. As usual, a 0 marks the end of the production rule.

A token code can be associated with the string value of the token, for both terminals and nonterminals, through the array tokenNames, also initialized in langtab.cpp. A matching list is also provided by the lexical analyzer, but only for the terminal tokens. The string values of the production rule tags are supplied by the flags array. This table is associated with a semType, and includes names for the supported tokens as well as the production rules. You can find a complete list of the semType definitions in apply.h. A partial list (for the tokens only) is in table.h.

These tables are mostly used to generate stack dumps through the "-d" option available with your generated parser. For example, the REDUCE dump will display the production rule associated with that state, and that comes out of the prods, prodx and tokenNames arrays. REDUCE will also display the production tag, and that comes from the flags array, through map. (Given a reduce state S, flags[map[S]] yields the associated production rule string tag.)
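As an illustration of how these tables combine, here is a sketch of a routine that prints the production rule for a reduce state N, in the style of the "-d" stack dumps. The table names are those described above; the routine itself is hypothetical, not part of the Qparser library.

    #include <iostream>

    // Sketch: print the production rule for reduce state N, e.g. "E : E + T".
    // prods[] holds token codes, left member first, 0-terminated;
    // tokenNames[] maps a token code to its string spelling.
    void printProduction(int N, std::ostream& out) {
      int p= Ctable::prodx[N];
      out << Ctable::tokenNames[Ctable::prods[p]] << " :";   // left member
      for (p++; Ctable::prods[p] != 0; p++)                  // right members
        out << ' ' << Ctable::tokenNames[Ctable::prods[p]];
      out << std::endl;
    }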

References
[1] D. E. Knuth, "On the Translation of Languages from Left to Right", Information and Control 8(6), pp. 607-639, 1965.
[2] Aho, Sethi and Ullman, Compilers: Principles, Techniques and Tools, Addison-Wesley.
[3] DeRemer and Pennello, "Efficient Computation of LALR(1) Look-Ahead Sets", ACM Transactions on Programming Languages and Systems, Vol. 4, No. 4, October 1982.
[4] yacc, Yet Another Compiler-Compiler; described in lex & yacc, O'Reilly.


Chapter 10: LR Parser Semantics


W. A. Barrett, San Jose State University nch10.doc, vs. 3.1

What are Semantics?


In chapter 4, we showed how to write a grammar, then use it to build a parser through the Qparser LR parsing system. We saw how the basic operations SHIFT, REDUCE and LOOKAHEAD apply in the LR parsing algorithm to develop a right-most parse of any sentence, in reverse order. We also saw how each REDUCE operation is associated with a production rule in the grammar. The SHIFT operations are used to read input tokens, and to catch syntax errors. Finally, we showed how some simple semantics actions can be attached to certain production rules. In this chapter, we introduce more semantics operations, along with an example compiler.

Why the REDUCE Actions are Important


Here's why the REDUCE actions are important:

- We're mostly interested in the sequence of production rules that make up the sentence, and less so in just how these rules are inferred from the input sentence.
- The production rule is known on each REDUCE action.
- The right member of the production rule can be found on the stack top upon each REDUCE action.

See Fig. 1 for a diagram of the stack situation just before and just after a REDUCE action. BEFORE the reduce action, three states are on the top of the parser stack, corresponding to the members E, +, T in the right member of this production rule. AFTER the reduce action, these three members have been popped and replaced by the left member E of the production rule.

[Fig. 1. REDUCE action illustrated for production rule E → E + T. Before the reduce, the stack top (TOS) holds s2 = E, s3 = +, s4 = T, the right member of the production rule; after the reduce, the stack top holds the left nonterminal, s2 = E.]


Augmented State Machine and Csem Class


To support semantic operations, we're going to add a third stack to the LR engine, as shown in Fig. 2 below. The third stack (carrying pCm, pCm-1, etc. in the drawing) will carry pointers to C++ objects of type Csem.
[Fig. 2. Augmented state machine. The LR finite-state control reads the input sentence a1 a2 a3 ... an and drives three parallel parser stacks: the state stack (sm, sm-1, ..., s0), the token stack (tm, tm-1, ..., t0), and the new semantics stack of pointers (pCm, pCm-1, ..., pC0) to objects of classes derived from Csem (e.g. Ctoken, Cexpr), using the action and goto tables to produce the output.]

Csem is an abstract class. Here's the Csem definition, which can be found in the file lib/sem.h:
class Csem {
protected:
  short semt;   // will be a semType enumerated type
public:
  Csem(short st) : semt(st) {}
  Csem(const Csem &cs) : semt(cs.semt) {}
  short getsemType(void) const {return semt;}
  void  setsemType(short tt) {semt= tt;}
  virtual bool isToken() const {return classCode() == CTOKEN;}
  virtual void dump(ostream& out) const= 0;
  virtual int  classCode(void) const= 0;
};

Since this is an abstract class, it can't be instantiated, except as part of a non-abstract class derived from it. Also, each derived class must supply a dump member function and a classCode member function. The dump function is used during parser debugging. The classCode function can be used to check the exact derived class this Csem object is part of, in case that has to be decided at runtime. Otherwise, the Csem class provides a single tag, of type short, which specifies one of many possible objects found in the parser stack.

The LR finite-state control program manipulates the semantics stack in parallel with the state and token stacks. For example, on a SHIFT action, a new Csem pointer will be pushed on the stack to correspond to the new state and token. On a REDUCE action, the Csem stack will be popped and pushed, corresponding to the usual actions.


The classCode Function


This function distinguishes the derived classes of Csem, e.g. Ctoken vs. Capplication. For example, in the Ctoken class, classCode returns CTOKEN (1). classCode is a virtual function, so that each derived class can determine its own identity as a Csem-derived object.
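For instance, a runtime check before a downcast might look like the following sketch. It uses the stackRef accessor introduced later in this chapter; the fragment is illustrative, not library code.

    // Sketch: verify the concrete type of a semantics-stack entry
    // before casting. stackRef(0) returns the Csem* on the stack top.
    Csem* p= stackRef(0);
    if (p != 0 && p->classCode() == CTOKEN) {
      Ctoken* tok= (Ctoken*) p;   // safe: the object really is a Ctoken
      // ... use tok's member functions here ...
    }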

The semType Classifier Tag


semType is a more precise classification of Csem objects. Some of its predefined codes describe the Ctoken objects; the remaining codes are taken from the production rule tags. Here's an example, found in file semtype.h, which is generated by the Qparser system based on the skeleton file semtype.skh. The codes from OTHER through GENL_KIND are related to the tokens; the remaining codes, ADD through SUB, are drawn from the production rule tags. This was generated from grammar calc\calc.grm.
typedef short semtType;

typedef enum semType {
/* 0*/  OTHER= 0,
        ERROR,
        IDENT,      // identifier
        RESWORD,    // reserved word
        CHAR,
/* 5*/  UCHAR,
        SHORT,
        USHORT,
        INTEGER,
        UINTEGER,   // fixed-point numbers
/*10*/  LONGINT,
        ULONG,
        FLOAT,      // floating-point numbers
        DOUBLE,
        CHARACTER,  // quoted character
/*15*/  STRING,     // quoted string
        SPECIAL,    // special token
        EOLTOKEN,   // end of line token
        EOFTOKEN,   // end of file token
        DEBUG,      // debugger token
/*20*/  GENL_KIND,  // default Csem object
        // the following are from the application grammar
        ADD,
        ASSIGN,
        DIV,
        EXPR,
/*25*/  ID,
        INT,
        MPY,
        PAREN,
        REAL,
/*30*/  SUB,
};

SHIFT Action
Recall that in a SHIFT action, a token is read from the input sentence. This action must push a Csem pointer, the new state number, and the token on the stack.


This clearly involves the lexical analyzer, discussed in chapters 2 and 3. Two kinds of input token are handled by the lexical analyzer: simple tokens and lexical tokens. A simple token stands for itself, for example, :=, +, *, /, etc. A lexical token stands for a set of input sequences, as defined in a .lex file. For example, an Identifier is a lexical token. It stands for any input sequence starting with a letter and continuing with letters, digits, or underbars. When a simple token is scanned, a NULL pointer value (0) is pushed on the stack. When a lexical token is scanned, a pointer to an object of type Ctoken is pushed. Ctoken is a derived class of Csem, and it describes the lexical token completely, as we'll see.

In Qparser, the lexical tokens Identifier, Real, Integer, String, and Character are all predefined by the lexical analyzer and require no special parsing treatment in the grammar, if you use one of the predefined lex files. You can use one of these special names in the grammar wherever necessary. The system must be alerted to the use of one of these names by including the name in a lexfile statement, like this:
lexfile=../lib/c.lex

The Ctoken class describes these through a semType flag and a union set. It and the lexical analyzer are described in detail in chapter 3. Suppose that the lexical analyzer scans a real number in the parse-time input stream, i.e. one with a decimal point or exponent part. It will create a Ctoken object that carries the double-precision value of the number in its dvalue data slot, and a copy of that object will be found in the parser stack, associated with that lexical token name. For example, the production rule
F : Real

when reduced during parsing, will place a Ctoken object containing a floating-point number on the top of the parser semantics stack. You can then refer to that object through the symbol Real in a semantics code fragment, as explained later. The Ctoken object was of course generated by the lexical analyzer, which scanned the associated source string and then called the associated lexical function for a Real to grind out the Ctoken contents. The semt will be DOUBLE or FLOAT, depending on the form of the real number. You can fetch the value as a double-precision number with the member function getDouble(), or as a string through getStringValue().

The semType definition will be found in table.h, after a parser has been generated. It's an enumerated type, and contains specifications for all the possible lexical objects, as well as production rule tags. When a Ctoken object is found on the stack, its nature can be determined by checking the semt value through the Csem member function getsemType(). If this is DOUBLE or FLOAT, then we can fetch the value of the number through the member function getDouble().

The table below lists all the possible Ctoken options currently supported by the Qparser lexical analyzer as found in the file lib/c.lex. This default lexical analyzer supports all the C lexical tokens, as suggested in the table.

  Token name    Semt value(s)                        Example tokens          Member functions that fetch the value
  Identifier    IDENT                                Ed15, one_two_three     getStringValue()
  Real          FLOAT, DOUBLE                        15.6, 22E-5, 35.7F-6    getStringValue(), getDouble()
  Integer       CHAR, UCHAR, SHORT, USHORT,          175, 3218723948,        getStringValue(), getInteger()
                INTEGER, UINTEGER, LONGINT, ULONG    17L, 0xFFa6
  String        STRING                               "A C string\n"          getStringValue()
  Character     CHARACTER                            '%'                     getStringValue(), getInteger()
  EOF           EOFTOKEN                             end of file             none
  EOL           EOLTOKEN                             end of line             none

The lexical file lib/pascal.lex provides the same lexical token names, but with a different set of lexical functions, as follows:

- Pascal identifiers are identical to C identifiers.
- Pascal integers are limited to decimal forms only: no hex or octal forms are recognized.
- Pascal floats are like C floats, except that only the E exponent form is accepted.
- Pascal strings must be quoted with single quotes, like this:
'here''s a string'

and an embedded single quote must be duplicated. There are no back-slashed forms as in C. A Pascal Character is the same as a Pascal String, except that it contains just one character.

REDUCE Action
On a REDUCE action, we expect to be able to perform an arbitrary operation associated with the production rule. In general, the parser system will execute a segment of C++ code written with the production rule in the grammar. We need to explain how to introduce code in the grammar, and how to make it refer to the Csem objects whose pointers appear in the parser stack.

Production Rule Tags and Semantic Code


As we've explained in chapter 4, a production rule is written in a grammar file like this:
E : E + T

At least one space must appear between the terminals, nonterminals and any special characters such as ":" and "+". The spaces are used to distinguish the tokens. If you write
E+T

(without spaces), then the lexical analyzer will assume that "E+T" is a token. Also, be aware that the leftmost E must be in the first column of a new line. Keep all of the right member material, including embedded C++ code fragments, out of the first column. Adding semantics to a rule requires adding a tag and a sequence of C++ code. Here's how that will appear in the grammar file:
E : E + T   #ADD                                      // the tag
            {cout << "E is " << E.2->getDouble(); }   // the code

Notice that the tag and C++ code go just after the production rule, but before the terminating semicolon. Also, the production, tag and code fragment can be spread over several lines, provided that you don't use the first column for anything but the leftmost E of the production rule.

The tag (ADD) is a name that identifies this production rule. This name will later appear in the semType enumerated type list found in file table.h. It therefore must be compatible with a C identifier. The name is preceded by a # character in the grammar file, and it's placed just after the production rule, but before the code and the terminating semicolon. Do not use the same tag for more than one production rule. You'll discover whether the name you've chosen conflicts with some other name when the parser is compiled. NO space or line feed may appear between # and the tag name. You've discovered that the tag can be omitted, and that's OK, but the parser generator will warn you about this. If you add semantics, we strongly recommend that you make up a reasonable tag name, not let the generator assign one for you.

Following the tag is a segment of C++ code, as illustrated above. This must be enclosed in a pair of braces {}. You can write any reasonable amount of C++ code in the braces, being careful that all inner braces are balanced, and all quoted strings are closed. This C++ code fragment will reappear within a generated member function apply of class Cpars. It will be executed just before the parser stack is popped on a REDUCE action involving this production rule.

Referring to the Parser Stack Objects


The parser stack carries pointers to objects of type Csem. You can refer to these objects by using the name of the production nonterminal or terminal symbol, like this, which we used in the example above:
E.2->getDouble()

Here, the E.2 is technically the name of the second nonterminal E in the production rule, not a C++ object pointer. What does E.2 have to do with a C++ object? Easy: the Qparser system will convert E.2 into a parser stack reference that is a C++ object pointer.

Look at the example again. The production rule is E : E + T. When this rule appears in a REDUCE action, the parser stack contains pointer slots for the three rightmost members of the rule. A slot for the T appears on the top of the stack, a slot for + just under it, and a slot for the E just under that. Now, the parser inherits stack functions from class Cstack, defined in file lib/stack.h. Among these is a stack reference function stackRef(int pos), which returns the Csem pointer found at position pos from the stack top. For example, the stack top pointer (T) is returned by stackRef(0), the pointer associated with the + is returned by stackRef(1), and the pointer associated with the E is returned by stackRef(2).

To make this referencing easy to use and robust, Qparser changes every nonterminal name reference in the code to an appropriate stackRef call. That happens during the parser building process, thanks to a conversion utility called lr1p, which works with grammar file information and a skeleton file to produce a compilable C++ file. You can view these expanded forms in file pars.cpp, in function apply, after the parser is built.
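The index-to-symbol correspondence for this rule can be summarized in a short fragment (illustrative only; it simply restates the mapping described above):

    // Stack slots at the REDUCE of E : E + T  (stack top = position 0):
    Csem* pT  = stackRef(0);   // T, the rightmost member, written as "T"
    Csem* pOp = stackRef(1);   // '+', a simple token, so this pointer is NULL
    Csem* pE  = stackRef(2);   // E, the right-member E, written as "E.2"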


The classdefs Declaration


The grammar usually needs another declaration, called a classdefs declaration. This provides the necessary link between a nonterminal (appearing in the grammar) and the class that the nonterminal will be associated with. In the case of our lexical tokens, the class will always be Ctoken. We'll see later on how we can introduce our own class objects, associated with nonterminals. Here's what this declaration looks like in the grammar:
classdefs= {
  Ctoken: Identifier, Real, Integer, String, EOF;
}

This states that the nonterminals Identifier, Real, Integer, String and EOF will be associated with a parse-time object of class Ctoken. We'll see that other nonterminals can be associated with user-defined derived classes of Csem in a way that makes it easy to create new parsing applications of any kind. We're going to add more class definitions to this as we build up more interesting grammars.

The globals Declaration


We usually need a declaration near the beginning of a grammar that looks like this:
globals= {
  #include "eval.h"
}

This requires some explanation. When Qparser builds a set of source files, any semantic code that's attached to the production rules is essentially copied into a newly-generated file apply.cpp (with a few transformations made to it). These code fragments will find their way into the function Cevpars::apply(void). You can find this function in apply.cpp after you've built a parser from a grammar. This function requires a class definition for Cevpars that's in file eval.h. So we need a special include directive in file apply.cpp, and that's supplied by this declaration in the grammar file. It causes the text between the { and the } to appear at the global level near the top of file apply.cpp. In this case, the line
#include "eval.h"

will appear in the generated file apply.cpp. (The left and right braces are not copied to apply.cpp; however, you can use any number of balanced braces, parentheses and brackets freely within this code fragment.) Other grammars in this chapter contain more global material.

No Semantics: Grammar g1
Let's work through some simple examples. Here's a basic grammar that accepts a sequence of mixed numbers, identifiers and strings as input, with no semantics information attached:
// g1.grm
lexfile=../lib/c.lex
classdefs={
  Ctoken: EOF, Identifier, Real, Integer, String;
}
globals= {
  #include "eval.h"
}

G : S EOF
S : S E
S : E
E : Identifier
E : Real
E : Integer
E : String

Note that S stands for some sequence of one or more Es. Each E is an identifier, number or string. This grammar file can be compiled into a parser program by following the procedure given near the end of chapter 4. You can use the same process for each of the grammar files developed in this chapter. We'll assume that you've built this grammar into a parser in what follows. (Do it NOW!) You have built a parser, haven't you?

Notice that this grammar only parses sentences. It doesn't provide any semantic operations to report numbers, etc. However, as we've explained in chapter 4, you can test your parser by supplying it with any sequence of numbers, identifiers or strings. Any special characters, like "+" or "*", will be reported as lexical errors. (They aren't tokens.) Otherwise, you'll discover that any sequence of legal tokens is acceptable. So here's a trial run with program g1.exe, generated from grammar g1.grm. The italics indicate what you type in. Since this is interactive, you need to supply an end-of-file with control-Z (DOS) or control-D (Unix).
C:\qparser\examples>g1
>> a b 15 "a string"
>> control-Z
C:\qparser

This is not very exciting, but at least we didn't get any syntax errors. If you need convincing that g1 really did parse the input strings, run it with option -d2 or -D2, and you'll see all the parsing actions. (-d1 shows only the REDUCE actions; -d2 shows REDUCE, SHIFT and LOOKAHEAD actions.)

Adding Semantics: Grammar g2


We're going to add some semantics code to this grammar that reports the numbers, identifiers and strings as they appear when the parser runs, calling the new grammar g2.grm. (You may already have done this in chapter 4.)
// grammar g2.grm
lexfile=../lib/c.lex
classdefs={
  Ctoken: EOF, Identifier, Real, Integer, String;
}
globals= {
  #include "eval.h"
}

G : S EOF
S : S E
S : E
E : Identifier #ID   {cout << Identifier->getStringValue() << endl;}
E : Real       #REAL {cout << Real->getDouble() << endl;}
E : Integer    #INT  {cout << Integer->getInteger() << endl;}
E : String     #STR  {cout << String->getStringValue() << endl;}

The semantics associated with the E : Identifier production just fetches the string value associated with the Identifier nonterminal, and sends it to standard output. endl is the preferred C++ way of writing a carriage-return and line-feed to standard output. For the E : Real production rule, we print the double-precision value associated with the token Real; similarly for the Integer and String production rules. Hence this grammar will accept any sequence of identifiers, numbers and strings, and will report each one as it appears in its REDUCE action.

As usual, this should be edited into a file, which we'll call g2.grm, then built into a translator. This is no longer just a parser, since it will generate some output through the cout statements. Here's a sample run with this program. Input is in italics:
C:\qparser\examples>g2
>> 15
15
>> 22.7E-3
0.0227
>> "a C string\n"
a C string\n
>> Edgar_Guest
Edgar_Guest
>>

As you can see, this parser echoes each lexical token to the output. By using the option d2 when g2 is run, we can see exactly when the output occurs:
C:\qparser\examples>g2 -d2 >> 15 --> SHIFT state 8 on token SHORT 15 0 8 ...Enter to continue --> REDUCE state 2 on E: Integer #INT 0 8 1 2 Integer SHORT 15 ...Enter to continue 15 --> REDUCE state 5 on S: E 0 8 1 5 E SHORT 15 ...Enter to continue >>

Notice that the 15 is printed just after the REDUCE report for state 2, but before the REDUCE operations are completed. The production rule E: Integer carries the code that refers to the Integer on the stack top. That's shown as a 15 in the stack dump, and, as we can see, our code


cout << Integer->getInteger() << endl

references this value and prints it.

Carrying Intermediate Values on the Stack


We can create Csem objects of our own design in the production rule code, and cause them to be carried in the parser stack. This makes it possible to evaluate something in response to a production rule, then return the evaluation as a parser stack pointer. It's always better to carry compiler and/or interpreter information on the parser stack, because of the recursive nature of most languages. If you try to carry variables or pointers as globals, any recursion will destroy old values. Creating a special parse-time stack also isn't necessary, since the existing one is designed for that purpose.

You need to associate some derived class of Csem with each nonterminal that is supposed to carry information. Note that every production rule has a nonterminal as its left member. When a rule is called out in a REDUCE action, we'd like to return a value by stuffing it into its left member. That left member nonterminal can later on return as a nonterminal in the right member of some other production rule.

For example, let's associate a type double value with nonterminal E in some grammar. E is supposed to represent some arithmetic expression when the parser runs. Then when we encounter a production rule like
E : Real

then we clearly want to fetch the value of the Real and attach it to the E. Here's what we need to do to make that happen:

1. Design a derived class of Csem that supports the required information. We'll call it Cexpr.
2. Allocate Cexpr from the heap as needed.
3. Use the production rule nonterminals as pointers to instances of Cexpr.
4. Consider the left member of a production rule as something to assign to.
5. Consider any right nonterminal members as something that carries a pointer to a Cexpr.

Design a derived class of Csem.


In our example, we need a derived class of Csem that carries a double. We'll call it a Cexpr. Here's a Cexpr definition; it can be found in examples\eval.h. (We've been including this file with all our grammar examples.)
#define CEXPR 3

class Cexpr : public Csem {
private:
  double dvalue;
public:
  Cexpr(void) : Csem(DOUBLE), dvalue(0) {}
  Cexpr(double dv) : Csem(DOUBLE), dvalue(dv) {}
  void   setValue(double dv) {dvalue= dv;}
  double getDouble(void) {return dvalue;}
  virtual void dump(ostream& out);
  virtual int  classCode(void) {return CEXPR;}
};


This carries a double-precision value in dvalue. Notice that it has a new class code, CEXPR. It's important, by the way, that the CEXPR define name be the same as the class name Cexpr, except in upper case. Also notice there's a constructor that accepts a double-precision value, and that it sets the data member dvalue. That value can later be fetched with getDouble(), and changed with setValue(double dv).
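A quick illustration of this small interface (not from the book's sources; just the two accessors in action):

    Cexpr* e= new Cexpr(2.5);          // construct, carrying 2.5
    e->setValue(e->getDouble() * 2);   // e now carries 5.0
    cout << e->getDouble() << endl;    // prints 5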

Allocate Cexpr from the Heap as needed


We can allocate an instance of a Cexpr through the use of the new operator, sending the pointer value to the production rule's left member, like this:
E : Real #EREAL { E = new Cexpr(Real->getDouble()); }

Recall that we have a constructor for a Cexpr object that accepts a double precision number as its parameter. The new operator allocates space for this object, returning a pointer to it. Finally, the Qparser system will make sure that this pointer will be pushed onto the stack after popping the Real pointer.

Use the production rule nonterminals as pointers to instances of Cexpr


The E in the left member, and any appearance of E in a right member of any production rule should now be associated with a Cexpr pointer.

Consider the left member of a production rule as something to assign to


We've seen this rule used previously, in this example:
E : Real #EREAL { E = new Cexpr(Real->getDouble()); }

Here, the left member of the production rule E : Real is being assigned to. It must receive a pointer to a Cexpr object, and it receives that through the new operation. Do not assign a production rule left member anything but an object allocated from the heap. For example, don't use a pointer to a local or global variable, or you will get a mysterious crash sometime during parsing.

Consider any right nonterminal member as something that carries a pointer to a Cexpr.
We've seen this rule previously:
E : E + T #ADD { cout << E.2->getDouble() << endl; }

This accesses the pointer associated with E in the right member. That E needs to be referenced as "E.2" rather than E to avoid confusion with the left member.


The apply Function


Let's examine just how a grammar is converted into C++ code to form a translator. We've seen how a C++ code fragment can be associated with a production rule. It seems to be executed when that rule is called up in a REDUCE action by the LR parser. How is this achieved?

Essentially, the Qparser system writes source code files. In particular, it generates file apply.cpp. It uses a so-called skeleton file lib\apply.skc for the task, along with information found in grammar tables. In a nutshell, here's what happens:

- Program nlr1 scans the grammar file. It produces the LR machine, gets scanner information, and also collects and transforms the code fragments. These are written into a binary file grm.tbl, where grm is the name of the grammar.
- Program lr1p reads both the binary grammar file grm.tbl and the skeleton file lib\apply.skc. The skeleton file is a text file (you can read it with a text editor), which is mostly copied into the target. However, it contains special break codes that cause information from the table file to be formatted and written to the target.
- The result is a compilable file apply.cpp, which has been tailored especially for your grammar. Most of the special grammar code is written into function apply.

apply will be called in a REDUCE state, just before any stack pop operations. Recall that the production rule is known, so the parser stack contents will correspond to the right members of the production rule at this moment. Each semantics code fragment added to a production rule is essentially copied from the grammar file into apply. However, a few changes are made:

- every reference to a production nonterminal is converted into a stack reference or a return reference;
- the production rule becomes a comment in the apply code;
- a production rule action is selected through a case statement based on the parser state and a table called map.

Let's see how a typical grammar rule is translated by these operations. Consider this rule:
Expr : Expr + Term #PLUS
    { Expr= new Cexpr();
      Expr.1->setValue(
          Expr.2->getDouble() + Term->getDouble());
    }

This will be translated into the following portion of function apply:


{ returnRef= (Csem*) new Cexpr();
  ((Cexpr*)returnRef)->setValue(
      ((Cexpr*)(stackRef(2)))->getDouble() +
      ((Cexpr*)(stackRef(0)))->getDouble());
}

The translation proceeds by first looking for any instance of a nonterminal name in the C++ code fragment. These are clearly the names Expr and Term. The Expr.1 and Expr.2 notation is also recognized, as a way of distinguishing different appearances of the same name in a production rule. Then:

- An appearance of a name as a left member, i.e. Expr.1, is turned into returnRef. This variable holds the return value of the apply function, and will be pushed onto the stack, replacing the right members of the production rule. A typecast is also inserted judiciously just after the "=" sign.


- An appearance of a name as a right member element is replaced by a function call, stackRef(n), where n refers to the location of the element in the stack relative to the stack top. This is also cast to the specified class type associated with the element.

Thus, the statement

    Expr.1= new Cexpr();

is translated into

    returnRef= (Csem*) new Cexpr();

The variable

    Expr.2

is translated into

    ((Cexpr*)(stackRef(2)))

The extra parentheses make sure that the cast applies to the function call, not something that the call points to. Thus
Expr.2->getDouble()

turns into
((Cexpr*)(stackRef(2)))->getDouble()

which then works correctly when the parser is run.

The apply function is essentially a big switch statement. It's called with a state variable, which is used to decide which of the production rule code fragments to call. Here's a portion of a typical apply function, as it's generated. You can now see how the PLUS production rule semantics is selected. The variable returnRef is initialized to 0 in case no rule is selected, and is returned in any case.

After each return from calling apply, the parsing system pops the stack by n elements, where n is the number of symbols in the right member of the production rule. The pointer value of returnRef (returned from apply) is then pushed on the stack. In order to make all this very generic, apply and the parser can only assume that a pointer to a Csem object is in the stack. It essentially knows nothing of the derived classes of Csem which in fact are carried in the stack. This is why the complicated casts are required in this code.
Csem* Cpars::apply(int cstate) {
  // call on a semantics action
  const semType pflag= map[cstate];
  Csem* returnRef= 0;
  switch (pflag) {
    case GENL_KIND:   // default flag
      break;
    case PLUS:   // Expr : Expr + Term
      { returnRef= (Csem*) new Cexpr();
        ((Cexpr*)returnRef)->setValue(
            ((Cexpr*)(stackRef(2)))->getDouble() +
            ((Cexpr*)(stackRef(0)))->getDouble());
      }
      break;
    // ... many more cases in here
    default:
      syntaxError("unrecognized PFLAG: %d", pflag);
  }
  return returnRef;
}


What Happens During Parsing


Let's review our example production rule with semantics:

E : Real #EREAL
    { E = new Cexpr(Real->getDouble()); }

In a SHIFT action, the lexical analyzer will scan the source text, will find a real number, and will allocate a Ctoken object carrying the number's value. A pointer to this object will be pushed onto the parser stack by the SHIFT action.

The EREAL rule will be triggered in a REDUCE. This comes after the SHIFT of the real number onto the parser stack. The real number's Ctoken should be on the stack top, and can therefore be referred to symbolically through the nonterminal name Real.

Before the stack is popped in the REDUCE operation, the parser will execute the code fragment associated with this production rule, the semantics portion of this production rule. The Real->getDouble() call will use the pointer on the stack top associated with the nonterminal Real to fetch the double-precision number placed there by the lexical analyzer. The getDouble member function fetches the floating-point number carried by the Ctoken object associated with Real. That double-precision number is passed to a Cexpr constructor, invoked by new, which will allocate an object of that type from the heap, and set its data member to the double value.

The pointer to the newly allocated Cexpr object is returned to the parser system. It will be pushed on the stack after the right-member pointers have been popped off the stack. This is part of the REDUCE operation that we studied earlier. It will therefore be available on the stack to other production rules that have an E nonterminal in their right member.

More Semantics: grammar g3


Let's rework our arithmetic grammar g0.grm, given in chapter 4, extending it to support more semantic operations. Here's the original grammar, with no semantics:
// g0.grm
lexfile=../lib/c.lex
classdefs= {
  Ctoken: EOF, Identifier, Integer, Real, String;
}
globals= {
  #include "eval.h"
}

G : E EOF        // grammar g0
E : E + T
E : T
T : T * F
T : F
F : ( E )
F : Identifier
F : Integer
F : Real
F : String

Let's have the F nonterminal carry a double-precision value through the Cexpr class. The value will be obtained from one of the production rules
F : Real
F : Integer

Here's what these rules look like with semantics based on the Cexpr class:
F : Real    #FREAL {F= new Cexpr(Real->getDouble());}
F : Integer #FINT  {F= new Cexpr((double) Integer->getInteger());}

The member function call Integer->getInteger() requires a double cast in order to access the correct Cexpr constructor. (However, it happens that most C++ compilers will in fact cast a long integer to a double and will choose this constructor without the cast.) Now we have the F nonterminal associated with a double value in two of its production rules. F will be NULL for the other production rules, and that can cause problems during parsing. For now, we'll just make sure they aren't invoked at runtime. We see that F can reappear in the production rule
T : T * F

Let's see if we can retrieve the double value associated with it, through more semantics code, like this:
T : T * F #MPY { cout << F->getDouble() << endl;}

Here's the grammar extended this way. We'll call it g3:


// g3.grm
lexfile=../lib/c.lex
classdefs={
  Ctoken: EOF, Identifier, Real, Integer, String;
  Cexpr: F;
}
globals= {
  #include "eval.h"
}

G : E EOF
E : E + T
E : T
T : T * F      #MPY   { cout << F->getDouble() << endl;}
T : F
F : ( E )
F : Identifier
F : Real       #FREAL {F= new Cexpr(Real->getDouble());}
F : Integer    #FINT  {F= new Cexpr((double) Integer->getInteger());}
F : String


We build a parser in the usual way. Here's a sample run, in which we've asked for each REDUCE operation to appear. The SHIFT moves aren't shown, to save space, but if you use -d2 instead, you can see how the 35 and 17.3 are picked up and pushed on the stack through SHIFTs:
C:\qparser\examples>g3 -d1
>> 35*17.3

The 17.3 should be picked up by F: Real and later printed by the T: T * F production rule. But the 35 is caught first:
--> REDUCE state 2 on F: Integer   #FINT
 0 11
 1 2 Integer SHORT 35
...Enter to continue

--> REDUCE state 5 on T: F
 0 11
 1 5 F 35
...Enter to continue

Here's where the 17.3 is caught and reduced through F: Real. (The SHIFT actions aren't shown here.) The value 17.3 is in the parser stack, at stack index 3. The previous value 35 is also in the stack, at stack index 1. (TOS is index 3.)
--> REDUCE state 3 on F: Real   #FREAL
 0 11
 1 14 T 35
 2 17 *
 3 3 Real DOUBLE 17.3
...Enter to continue

Now we're ready for the REDUCE action on production T: T * F. The 35 is in the T position, and the 17.3 in the F position, exactly as we expect. The semantics for this rule prints the value at stack index 3, the 17.3:
--> REDUCE state 10 on T: T * F   #MPY
 0 11
 1 14 T 35
 2 17 *
 3 10 F 17.3
...Enter to continue
17.3

Here's the printed value. If parser debugging is suppressed, this is all you see. There's more, but we'll end the action here.

Adding more Calculator Functions


We'll add more calculator functions to our grammar. Let's include subtraction and division operations in grammar g0, calling the result g4.grm. We'll also drop the Identifier and String production rules, since they don't belong in our simple calculator yet. So here's g4.grm with these two changes and no semantics:
// g4.grm, with - and / operators, also no Identifier or String
lexfile=../lib/c.lex
classdefs= {
  Ctoken: EOF, Integer, Real;
}
globals= {
  #include "eval.h"
}

G : E EOF
E : E + T
E : E - T
E : T
T : T * F
T : T / F
T : F
F : ( E )
F : Real
F : Integer

Don't type this up yet--we're going to add semantics to this.

The Single-Production Rules


For now, note that the rule
T : F

should cause any value associated with F to be transferred to T. In other words, if F is associated with a Cexpr, then so should T be. Similarly, the rule E : T implies transferring any T value to E. These two production rules are called single production rules: they consist of a single nonterminal as the right member. The Qparser system will in fact automatically carry any semantics associated with the right member over to the left member. That amounts to doing nothing in the REDUCE operation: the system ordinarily pops one item, then pushes one item on the stack, so to carry over the semantics pointer from the right to the left side, the system just does nothing instead!

The Multiplication Rule

Now let's look at the multiplication rule. We've discovered that if F is associated with a Cexpr, then so should T and E be. Therefore, when the multiplication rule
T : T * F

is ready to reduce, we should have a Cexpr object associated with both the T and the F in the right member. These in turn carry double-precision numbers. We're expected to return a pointer to a Cexpr object containing a double-precision value. Obviously that value will be the product of the values associated with T and F. So the code for this rule should look like this:
T= new Cexpr(T.2->getDouble() * F->getDouble());

We need to follow a similar pattern for all the other binary operators.
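For instance, the addition rule takes exactly the same shape (this line also appears in the complete g4 listing below):

    E : E + T #ADD { E= new Cexpr(E.2->getDouble() + T->getDouble());}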

Printing the Result


The resulting calculation will eventually appear when the rule
G : E EOF

is reduced. Note that this is the last rule reduced, since the LR parser does a derivation in reverse order. So the E here should be associated with some calculated value, depending on the various operators implied in the derivations from E. Let's display the answer like this:
cout << E->getDouble() << endl;

Dealing with the Parentheses


The rule F : ( E ) is not a single production rule. We need to make sure the pointer associated with E is transferred to F, which is easy to do with the code fragment:
F= E;

More Semantics: grammar g4


Here's the complete grammar, g4.grm, so far:
// G4.grm
lexfile=../lib/c.lex
classdefs={
  Ctoken: EOF, Identifier, Real, Integer, String;
  Cexpr: E, T, F;
}
globals= {
  #include "eval.h"
}

G : E EOF   #GOAL { cout << E->getDouble() << endl;}
E : E + T   #ADD  { E= new Cexpr(E.2->getDouble() + T->getDouble());}
E : E - T   #SUB  { E= new Cexpr(E.2->getDouble() - T->getDouble());}
E : T
T : T * F   #MPY  { T= new Cexpr(T.2->getDouble() * F->getDouble());}
T : T / F   #DIV
    {
      if (F->getDouble() == 0.0) {
        cout << "Division overflow" << endl;
        T= new Cexpr(0.0);
      }
      else T= new Cexpr(T.2->getDouble() / F->getDouble());
    }
T : F
F : ( E )   #PAREN {F= E;}
F : Real    #FREAL {F= new Cexpr(Real->getDouble());}
F : Integer #FINT  {F= new Cexpr((double) Integer->getInteger());}

This is now a complete calculator. Its only drawback is that it accepts just one numeric expression on each invocation. There's no concept of accepting a series of expressions or assignment statements. There are also no variables--they require doing something with the Identifier token.


Testing g4.grm
As usual, save the edited grammar file g4.grm, modify the makefile, then call make to generate a parser program. g4 will now be a four-function calculator that expects a single expression, then reports its value. Here's a test run that shows that it appears to work correctly:
C:\qparser\examples>g4
>> 3*(8-3)
>> ^D          // control-D to terminate the source (Unix)
               // ... use control-Z in DOS
15             // the answer

As usual, we need to send an EOF to the lexical analyzer, which requires entering control-D (in Unix) or control-Z (in DOS). You won't see any output if you kill the program with control-C, because that prevents the parser from completing its parsing actions.

Preventing Division Overflow


The division operator will fail if the divisor happens to be zero. Since most systems use the IEEE floating-point standard, the result of dividing a non-zero number by zero is +infinity or -infinity. Further arithmetic with infinity is possible, but not practical. We prevent that from happening by testing for a zero divisor. That's the purpose of the code associated with the DIV production rule:
T : T / F #DIV
    {
      if (F->getDouble() == 0.0) {
        cout << "Division overflow" << endl;
        T= new Cexpr(0.0);
      }
      else T= new Cexpr(T.2->getDouble() / F->getDouble());
    }

This will generate a friendly message and make sure that the returned value for T is reasonable.

Using Identifiers and the Symbol Table


A symbol table and associated methods are provided in the lib and calc directories. A symbol table is allocated by instantiating a Csymtab<attribute> template. The attribute should in general be another class that describes the attributes carried by the symbol table. See chapter 6 for more details.

We can incorporate names into our calculator system by providing some way to attach a value to a name. That's done in most programming languages through an assignment statement, like this:
a := 22+15;

which causes the current value of a to be 37. Let's add production rules for an assignment statement; better yet, let's permit a sequence of assignment statements terminated by semicolons, rather than just a single statement. To do that, we need to replace the G : E EOF production rule with these:
G : StmtList EOF

// NOTE: these are REAL semicolons in the following
// Each statement "Stmt" will be terminated by a semicolon
StmtList : StmtList Stmt ;
StmtList : Stmt ;
Stmt : Identifier ":=" E

Notice how the goal G is now a sequence of Stmt forms. Each Stmt is an assignment statement, followed by a semicolon, rather like in the C language. (If you prefer the C syntax, just change the ":=" to "=".) Our expression E appears as the right side of the assignment statement. The := token must be quoted to avoid confusion with the colon symbol. We don't need to add any semantics to the StmtList or G production rules, since these don't have to carry any information. However, the assignment statement needs semantics. In the process of adding code, we'll review how the symbol table system is utilized.

Symbol Object
The Qparser symbol table system was described in chapter 6; please review that if necessary. We need an attribute class Csymbol to carry a double-precision value. It could look like the following, which can be found in calc/csymbol.h:
class Csymbol {
private:
  double dvalue;
public:
  Csymbol(double dv= 0.0) : dvalue(dv) {}
  Csymbol(const Csymbol &csym) : dvalue(csym.dvalue) {}
  virtual void dump(ostream& out) const { out << dvalue; }
  double   getDouble(void) const {return dvalue;}
  void     setValue(double dv) {dvalue= dv;}
  Csymbol& operator= (const Csymbol& source) {
    dvalue= source.dvalue;
    return *this;
  }
};

The important member is dvalue, with its two member functions getDouble and setValue. Notice that this object can be constructed either with or without an initial value. We need to allocate a symbol table object, which we'll do by adding two lines to the globals section, like this:
globals= {
  #include "eval.h"
  #include "csymbol.h"
  Csymtab<Csymbol> symtab;
}

The second include pulls in our symbol table object definition. The symtab declaration will create a global symbol table for carrying our symbols.
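Before wiring the table into the grammar, here is a small usage sketch of the two Csymtab calls we rely on below. The signatures follow their use in this chapter (pushSymbol enters a name with its attribute; findSymbol copies the attribute out and reports success); treat the details as assumptions rather than a definitive API description:

    Csymtab<Csymbol> symtab;        // a table whose attributes are Csymbols

    Csymbol attr(3.5);
    symtab.pushSymbol("a", attr);   // enter "a" carrying 3.5

    Csymbol found;
    if (symtab.findSymbol("a", found))     // copies the attribute out
      cout << found.getDouble() << endl;   // prints 3.5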

Assignment Operations
In an assignment statement, an identifier (Identifier) appears on the left side. If this were a complete programming language, we should make sure that the identifier is declared. However, we have here just a sequence of assignment statements with no declarations. So we consider the first appearance of an identifier on the left side of an assignment as its declaration. It follows that we don't need to find it in the symbol table, just enter it using pushSymbol, like this:
Stmt : Identifier "=" Expr #ASSIGN
    {  // Identifier may or may not be in the symbol table
       double value= Expr->getDouble();
       const string np= Identifier->getStringValue();
       Csymbol symp(value);
       symtab.pushSymbol(np, symp);
       cout << np << "= " << value << endl;
    }

The plan here is to push the identifier into the symbol table whether or not it's appeared previously. This little language has no declarations for identifiers, so we use the first appearance of a name on the left side of an assignment as its declaration. If it isn't in the symbol table, we enter it along with the value associated with E. If it is in the symbol table, we just replace its current value with that of E. We also report the value of E through the cout line shown.

This strategy has a certain downside: using the same symbol on the left side of an assignment several times will result in that many appearances in the symbol table. That will go unnoticed in this exercise, but may cause problems in other compiler applications. We could get around that by testing for the identifier in the symbol table first, using an existing entry (if there) and entering a new one only if it's not there.

Note that getStringValue() returns a const string&, so we declare a string np to receive it. Also, we use the Csymbol constructor that accepts the current value of Expr as a parameter.
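A sketch of that alternative follows. Since findSymbol as used in this chapter copies the attribute out, replacing a value in place needs some update operation on the table; the updateSymbol call below is hypothetical, standing in for whatever in-place update Csymtab actually offers:

    // Guarded entry: push only on first appearance, otherwise update.
    Csymbol existing;
    if (symtab.findSymbol(np, existing))
      symtab.updateSymbol(np, Csymbol(value));  // hypothetical update call
    else
      symtab.pushSymbol(np, Csymbol(value));    // first appearance: declare it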

Identifier Reference
We can now deal with the issue of an identifier reference, which will appear in an F production rule, like this:
F : Identifier

This production rule will be triggered on any appearance of the identifier in the right member of an assignment statement, for example, X in the assignment statement
A := X*15;

The code we need should first check whether this identifier is in the symbol table. If it isn't, it's never been declared by appearing on the left side of an assignment statement; that deserves an error complaint. If it is in the symbol table, we can fetch its value and send it on to the F nonterminal as usual. So here's what that code looks like:
F : Identifier #IDENT
    {
      const string np= Identifier->getStringValue();
      Csymbol cp;
      if (symtab.findSymbol(np, cp)) {
        F= new Cexpr(cp.getDouble());
      }
      else {
        cerr << "undeclared identifier: " << np << endl;
        F= new Cexpr((double) 0.0);
      }
    }

The remainder of the grammar file is exactly as before.

Testing the Grammar


Call this new grammar g5.grm. As usual, set this file name in makefile, then call make (or nmake) to generate a new parser, which will now be a somewhat more sophisticated calculator. Here's the complete g5.grm:
// G5.grm
lexfile=../lib/c.lex
classdefs={
    Ctoken: EOF, Identifier, Real, Integer, String;
    Cexpr:  E, T, F;
}
globals= {
#include "eval.h"
#include "csymbol.h"
#include "symtab.h"
Csymtab<Csymbol> symtab;
}
G        : StmtList EOF
StmtList : StmtList Stmt ;
StmtList : Stmt ;
Stmt : Identifier "=" E #ASSIGN
{   // Identifier may or may not be in the symbol table
    double value= E->getDouble();
    const string np= Identifier->getStringValue();
    Csymbol symp(value);
    symtab.pushSymbol(np, symp);
    cout << np << "= " << value << endl;
}
E : E + T #ADD   { E= new Cexpr(E.2->getDouble() + T->getDouble()); }
E : E - T #SUB   { E= new Cexpr(E.2->getDouble() - T->getDouble()); }
E : T
T : T * F #MPY   { T= new Cexpr(T.2->getDouble() * F->getDouble()); }
T : T / F #DIV
{
    if (F->getDouble() == 0.0) {
        cout << "Division overflow" << endl;
        T= new Cexpr(0.0);
    }
    else T= new Cexpr(T.2->getDouble() / F->getDouble());
}
T : F
F : ( E ) #PAREN    { F= E; }
F : Real #FREAL     { F= new Cexpr(Real->getDouble()); }
F : Integer #FINT   { F= new Cexpr((double) Integer->getInteger()); }
F : Identifier #IDENT
{
    const string np= Identifier->getStringValue();
    Csymbol cp;
    if (symtab.findSymbol(np, cp)) {
        F= new Cexpr(cp.getDouble());
    }
    else {
        cerr << "undeclared identifier: " << np << endl;
        F= new Cexpr((double) 0.0);
    }
}

Here's a test. Notice that we now must enter an assignment statement followed by a semicolon. If you don't enter the semicolon, the parser will expect more expression material.
C:\qparser\examples>g5
>> a=15;       // declare an a
15
>> b=a-5;      // use a, also declaring b
10
>> c=b*(8-3);
50
>> // we can type a comment, which will be ignored
>> // ...then retype the previous expression, except across several lines
>> c =
>> b*(8-3)
>> ;           // the result is printed when the semicolon is scanned
50
>> d= x+a;     // x hasn't been defined
x is undeclared
15
>> b=22;       // using b again
22
>>

An Integer Compiler -- compi


A grammar based on G0 can be used to generate integer assembler code for simple expressions in a manner similar to that described in chapter 8. Let's start with this grammar, which we'll call compi.grm:
//  COMPI -- A compiler for integer expressions in
//  C-style assignment statements.  No optimizations

lexfile= ..\lib\C.lex
classdefs= {
    Ctoken: Identifier, Integer, EOF;
}
globals= {
#include "csymbol.h"
#include "eval.h"
}
Goal  : Stmts EOF #QUIT
Stmts : Stmts Stmt ;
      | Stmt ;
Stmt  : Identifier "=" Expr #ASSIGN
Expr  : Expr + Term #PLUS
      | Expr - Term #MINUS
      | Term
Term  : Term * Unary #MPY
      | Term / Unary #DIVIDE
      | Unary
Unary : - Primary #UMINUS
      | + Primary
      | Primary
Primary : ( Expr )
      | Identifier #VARIABLE
      | Integer #INTVAL

This grammar describes a sequence of assignment statements in C style (note the use of "=" rather than ":=" for the assignment operator). Let's consider how to generate reasonable single-register code using this grammar. For example, the rule
Expr : Expr + Term

should produce something like this in assembler:


mov  AX,something    ; from Expr.2
push AX
mov  AX,something    ; from Term
pop  DX              ; from this production rule
add  AX,DX

We were able to do this easily with the recursive descent compiler of chapter 8. It permitted us to emit some code at any edge in any syntax diagram. However, this bottom-up production rule parser is fundamentally different. No semantics can be emitted until the right member of a rule is complete. The LR parser doesn't know which rule will come up in a REDUCE state until the right member is fully parsed. Furthermore, all the material implied by Expr.2 and Term will already have been scanned when this rule pops up. Nevertheless, the operations happen in roughly a left-to-right fashion. Expr.2 is parsed before Term. So we could let the Primary production rule emit an instruction for loading EAX, like this:
Primary : ( Expr )
    | Identifier #VARIABLE
    {   const string& np= Identifier->getStringValue();
        Csymbol cp;
        if (!symtab.findSymbol(np, cp))
            syntaxError("undeclared identifier: %s", np.c_str());
        cout << "  mov EAX," << np << endl;
    }
    | Integer #INTVAL
    {   cout << "  mov EAX," << Integer->getInteger() << endl;
    }

For an Identifier such as abc, this generates the assembler code

    mov EAX,abc

For a literal integer such as 38, this generates the code

    mov EAX,38

These instructions also come out before any of the operations on them do. So far so good.

Problem with Binary Operators


The difficulty we face is that we need to emit
push EAX

between the first and the second operand of a binary operator. That is, we want the effect of
Expr : Expr + { cout << "  push EAX\n"; } Term

but the LR parser won't permit that to happen. The production rule
Expr : Expr + Term

is indivisible: an LR parser expects the whole right side to be fully parsed before this rule is reduced. Dropping the push EAX into the Expr rule won't do, either, since that would happen on every Expr. However, there is a way to do this. We just introduce another production rule that catches the + operator, like this:
Expr : Expr Plus Term #PLUS

The new nonterminal Plus is supposed to derive a + token, like this:


Plus : + #PLUSOP

Notice that the PLUSOP rule will be reduced before the PLUS rule is reduced. It's also reduced before the Term in the PLUS rule. Assuming that our modified grammar is still acceptable to our LR parser (i.e. no conflicts), which it is, we can generate the push EAX in the PLUSOP rule. It'll come out after the Expr.2 has been reduced, but before the Term has been reduced, just what we need! So here's how the addition rules look with these simple changes, and our semantics code in place:
Expr : Expr Plus Term #PLUS
{   cout << "  pop EDX\n";
    cout << "  add EAX,EDX\n";
}
Plus : + #PLUSOP
{   cout << "  push EAX\n";
}

With these two rules, and the Primary semantics described earlier, we get this assembler emitted for a+b:
mov  EAX,a     ; from the Primary VARIABLE rule
push EAX       ; from the PLUSOP rule
mov  EAX,b     ; from the Primary VARIABLE rule
pop  EDX       ; from the PLUS rule
add  EAX,EDX

The other three operators (-, *, /) are handled in a similar way. Here's what the complete grammar looks like for a sequence of assignment statements, generating Pentium integer assembly code:
//  COMPI -- A compiler for integer expressions in
//  C-style assignment statements.  No optimizations


lexfile= ..\lib\C.lex
classdefs= {
    Ctoken: Identifier, Integer, EOF;
}
globals= {
#include "csymbol.h"
#include "eval.h"
Csymtab<Csymbol> symtab;    // the symbol table, as in g5.grm
}
Goal  : Stmts EOF #QUIT
Stmts : Stmts Stmt ;
      | Stmt ;
Stmt  : Identifier "=" Expr #ASSIGN
{   // Identifier may or may not be in the symbol table
    const string np= Identifier->getStringValue();
    Csymbol symp;
    if (!symtab.findSymbol(np, symp)) {
        symtab.pushSymbol(np, symp);
    }
    cout << "  mov " << np << ",EAX\n";
}
Expr  : Expr Plus Term #PLUS
{   cout << "  pop EDX\n";
    cout << "  add EAX,EDX\n";
}
      | Expr Minus Term #MINUS
{   cout << "  pop EDX\n";
    cout << "  sub EAX,EDX\n";
    cout << "  neg EAX\n";
}
      | Term
Term  : Term Mpy Unary #MPY
{   cout << "  pop EDX\n";
    cout << "  imul EDX\n";
}
      | Term Div Unary #DIVIDE
{   cout << "  mov ECX,EAX\n";
    cout << "  pop EAX\n";
    cout << "  cdq\n";
    cout << "  idiv ECX\n";
}
      | Unary
Plus  : + #PLUSOP
{   cout << "  push EAX\n"; }
Minus : - #MINUSOP
{   cout << "  push EAX\n"; }
Mpy   : * #MPYOP
{   cout << "  push EAX\n"; }
Div   : / #DIVOP
{   cout << "  push EAX\n"; }
Unary : - Primary #UMINUS
{   cout << "  neg EAX\n"; }
      | + Primary
      | Primary
Primary : ( Expr )
      | Identifier #VARIABLE
{   const string np= Identifier->getStringValue();
    Csymbol cp;
    if (!symtab.findSymbol(np, cp))
        syntaxError("undeclared identifier: %s", np.c_str());
    cout << "  mov EAX," << np << endl;
}
      | Integer #INTVAL
{   cout << "  mov EAX," << Integer->getInteger() << endl; }

Here is a sample run with this little compiler. Two assignment statements are compiled into reasonable assembler:
 1: // compi.in
 2:
 3: a = 22*13;
        mov  EAX,22
        push EAX
        mov  EAX,13
        pop  EDX
        imul EDX
        mov  a,EAX
 4: b = (1-a)/(-a + 16);
        mov  EAX,1
        push EAX
        mov  EAX,a
        pop  EDX
        sub  EAX,EDX
        neg  EAX
        push EAX
        mov  EAX,a
        neg  EAX
        push EAX
        mov  EAX,16
        pop  EDX
        add  EAX,EDX
        mov  ECX,EAX
        pop  EAX
        cdq
        idiv ECX
        mov  b,EAX

Garbage Collection in Qparser


You are probably wondering what happens to all the objects that get allocated from the heap (with new) and are then apparently ignored later, during the parsing phase. Isn't it necessary to write some delete calls so that these won't create a memory leak? This is a special problem for programs written in C++: any objects allocated from the heap with new should eventually be deleted before exiting from the program. It happens that this isn't too important for a compiler, since it's supposed to work through a source file, then exit, and the operating system is expected to clean up any memory leak problems you may have. In general, we recommend not trying to delete objects until your compiler strategy is firm, and you feel you must release memory resources. If you delete the same object twice, in general, you'll generate a memory manager trap. Also, if you try to dereference an object that's been deleted, you'll get some kind of trap. In particular, Visual C++ (in debug mode) will fill each deleted object with the letter "d", so that such a trap will be sure to occur, and you can spot the problem. It will also not reuse deleted space, to prevent the letter "d" from disappearing through a later allocation of that space.

However, the Qparser system does make limited use of deletion in the parser stack, because there's so much traffic in new and delete operations. If you do nothing about the Csem objects that appear on the stack, the parser system will essentially look for the ones about to be discarded after a REDUCE action, and release them. Recall that a REDUCE is supposed to strip some elements from the parsing stack, so these are prime candidates for deletion. However, it's not as simple as that. Here are the details:

- Upon each return from function apply, the parser looks at the returned pointer, which is associated with the production rule's left member. If the return pointer agrees with one of the right member pointers (on the semantics stack), then all but that pointer are assumed to be subject to release, and a delete call is invoked on them. If it doesn't agree with any of the right member pointers, then all of them are released.
- The parser treats a single production as a special case. A single production has a single terminal or nonterminal as its right member. If the production does not have a tag (and also no semantics), then the right member pointer is kept on the stack, effectively becoming the left member pointer. This is usually what you want. Single productions provide a kind of transfer function, and you generally want the semantic objects to be transferred from the right member to the left member as well.
- If the single production does have a tag, then the usual checking rules apply. The return pointer may or may not agree with the right member pointer. If it does, then the pointer is just left on the stack. If it doesn't, then the old pointer is deleted, and the new one takes its place.

So if you must change the semantics on a single production, then you must tag the production rule and


provide the appropriate semantics code, like the following example


A : B #CHANGE
{   A= new Cvalue(B);   // fix up A based on B's values
}
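The release rule described above can also be pictured with a few lines of code. This is an illustration of the policy, not Qparser's actual source; the function name and parameters are invented for the sketch:

    // After a REDUCE: 'returned' came back from apply(); rhs[0..n-1] are
    // the right-member pointers about to be stripped from the stack.
    void releaseAfterReduce(Csem* returned, Csem* rhs[], int n) {
        for (int i= 0; i < n; i++) {
            if (rhs[i] != returned)  // the survivor becomes the left member
                delete rhs[i];       // everything else is released
        }
    }

If returned matches none of the right-member pointers, the loop deletes all of them, which is exactly the behavior described above.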

Defeating Garbage Collection


You may not want this default kind of garbage collection. For example, it's common to just pick up some of the object pointers in the right members and build them into the returned object, as part of a struct or class. We'll do this in chapter 11 as a way of improving target code generation. For example, consider a Cnode object like the following that takes other Cnode objects as pointers. We've left out a lot of supporting details:
#define CNODE 5

class Cnode : public Csem {
    char   oper;              // the operator character, e.g. '-'
                              //   ("operator" is a C++ keyword, so the
                              //   member needs some other name)
    Cnode  *left, *right;     // the two subtrees
public:
    Cnode(Cnode *l, char op, Cnode *r) :
        left(l), oper(op), right(r) {}
    ~Cnode(void);                     // need a destructor
    virtual void dump(void);          // need to write this
    virtual int  classCode(void) {return CNODE;}
};
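The destructor declared above has an obvious job, given the discussion that follows: deleting a node should take its whole subtree with it. A minimal sketch, assuming left and right are always either heap-allocated Cnode objects or NULL:

    Cnode::~Cnode(void) {
        delete left;    // recursively deletes the left subtree;
        delete right;   //   delete on a NULL pointer is harmless
    }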

The idea here is to construct a new Cnode object from a pair of pointers to other Cnode objects. We will later work through the data structure built by repeated applications of this constructor, and eventually delete the whole structure. Using this, we can write the production semantics like this:
E : E - T #SUB { E = new Cnode(E, '-', T); }

It's clear that we must defeat the automatic garbage collection scheme. Otherwise the parser will just delete the E and T nodes, which will cause havoc later on when we try to access left and right. There are two ways to defeat garbage collection. We can do it globally, by calling the Cpars member function defeatGC(). To do this, you need to force a production rule reduction as the first action in your grammar. This can be done by creating a rule that will be called as the very first action. Here's how to do that:
G : Open StmtList EOF
Open : Empty #OPEN
{   defeatGC(); }
StmtList :   // etc.

Then, no automatic garbage collection on the compile-time stack is ever attempted. It's up to you to explicitly delete the stack objects in each and every production rule. If most of your rules in fact pick up pointers from the stack, this is the approach to take. The other way (which we recommend) is to call defeatGConce() somewhere in a particular production rule's semantics code, like this:


E : E - T #SUB { E = new Cnode(E, '-', T); defeatGConce(); }

This will cause automatic garbage collection to be suppressed for this particular reduce action, thereby protecting the two objects E and T from unwanted deletion. Automatic garbage collection is still performed for other rules, except for those in which you've switched off the action, of course.


Chapter 11: AST-based Code Optimization


W. A. Barrett, San Jose State University nch11.doc, vs 3.1

Introduction
In this chapter, we examine how to generate optimized assembly code for an assignment statement. In doing so, we first review why optimization is needed, by examining the assembler code generated by the simple compiler in chapter 10. That compiler generated correct code for expressions and assignment statements, but not very optimal code. For example:

- It usually generates many more pushes and pops than are necessary, even with a single arithmetic register. Since a push or pop involves a memory access, it's more expensive (in time) than a register-to-register operation. Some of these are easily eliminated altogether; others require attention to register allocation.
- It fails to make use of more than one of the six Intel registers available for arithmetic, instead using the runtime stack to push and pop intermediate values. The process of assigning registers to intermediate results is called register allocation.
- It fails to notice constant operations; these should be evaluated in the compiler, not at runtime. Compile-time constant evaluation is called constant folding.
- We haven't looked at mixed types in expressions, i.e. the use of floats intermixed with ints. These can result in some awkward assembler code, calling for optimization.
- There are various arithmetic reductions that might be applied, to reduce the final instruction count.
- It fails to take advantage of various special instructions provided by the microprocessor. For example, an incrementation of a variable can be done with a single instruction, but it may require some attention by the compiler to recognize.

All of these problems can be cured by first generating an abstract syntax tree (AST), as described in chapter 5, then examining the tree with a view to generating minimal assembler code.

Control structure optimizations look for the most efficient way of carrying out conditional branching operations. For example, we can completely eliminate one of the branches of an if-then-else statement if the boolean used in the testing is constant. We'll examine this and other control issues in chapter 14.

Block optimization carries the optimization process a step further. These inspect a sequence of assignment statements, with these two views in mind:

- Are any of the assignment statements useless? A statement S will be useless if it assigns to some variable V that isn't required in subsequent assignments, but is reassigned later. This is called redundant code elimination. The calculations that went into statement S can be dropped without changing any results.
- Can some calculations be saved in a temporary register and used later? This is called common subexpression elimination. We'll discuss this in chapter 15.


Constant Folding
Let's start by looking at a translation generated by grammar compi.grm, given and discussed toward the end of chapter 10. Since compi follows about the simplest approach that we can find for an expression compiler, we'll refer to it in what follows. Here's a simple assignment statement that illustrates the need for constant folding, among other things:
a = 22*13;

and here's its translation:


mov  EAX,22
push EAX
mov  EAX,13
pop  EDX
imul EDX
mov  a,EAX

Two questions immediately occur to us: why didn't this compiler notice that 22*13 is 286, and use that number? Instead it pushed this constant arithmetic into the runtime assembler code. And is the push and pop really necessary? In fact, this assignment statement could be translated into a single instruction, instead of these six:
mov a,286

Constants can appear in expressions in several different ways, which are not obvious at first blush. All of these should be considered in a highly-optimizing compiler framework:

- The const attribute essentially tells the compiler that the associated name is a constant. The compiler should just replace such names by their constant values before proceeding.
- All operations in the language are potentially run on constants, not just the usual arithmetic ones. For example, a comparison of two constants, like 15 > 16, results in a constant Boolean (false).
- Constant expressions are often used in declarations that initialize variables; they must obviously be folded, or the compilation will fail.
- When a constant Boolean appears in a conditional, the conditional expression or statement should be reduced to a non-conditional form.
- Some expressions are written with intermixed constants and variables. Sometimes a rearrangement (following the algebraic rules for associativity and commutativity, of course) can fold some constant operations.

Analyzing the Problem


The problem is with the simple strategy we've adopted for generating code: it is essentially based on the assumption that we have a single register for intermediate results; if we need to hold a temporary, values are saved in the runtime stack, and code is generated essentially as for a postfix evaluator. If we look into the production rule reduce operations a little more deeply, it should be clear that most of these problems stem from the fact that we are trying to generate assembler code as we parse. An important consequence of this decision is that our compiler retains very little information about what's passed, and has no information about what's to come! It behaves like a zombie with no memory of the past, no concept of the future, only staggering about dropping assembler instructions as needed (which do in fact carry out the computation correctly!).

The same problem arises with recursive descent. Examine the single-register compiler for the 80x86, and how it generates assembly code, and you'll see that it suffers from the same defects. In fact, it generates the same assembler as compi, and for the same reasons: it retains no information about the past, and has no concept of what's to come.

The obvious solution to this problem, and a good start toward optimization, is to not try to generate code while parsing, but instead save up the parsing information in a data structure, then later walk through the data structure with an eye to finding optimizations. Two forms of data structure lend themselves to optimization: the AST and a sequence of quads. We'll discuss what can be done with an AST in this chapter, reserving quad optimization for chapter 15.

Mixed Type Conversions


Another problem with the code-as-it-goes strategy may arise in dealing with mixed-mode arithmetic. This is where a binary operator is expected to work on two operands of different types, e.g. one is an integer, the other a float. Suppose variable r is type real and variable i is type integer. compi does not distinguish different number types, but it could easily be extended to do so. Let's explore the issues by considering the following multiplication:
i * r

compi will generate this instruction upon parsing the i, regardless of what follows:
mov EAX,i

At this point, it will discover the *, and realize that EAX must be pushed in the stack. So it emits this instruction:
push EAX

Note that at this point, compi is not aware that the right operand of this multiplication will be a float; in fact, it has no idea of what's coming in the source input. It's reasonable for each of the parsing functions to return a type as a return value so that the higher-level functions can work out the resulting type. So we can expect the parser to convey back the information that an integer has been placed in EAX to its subsequent operations. Unfortunately, our extended compi will now encounter the floating-point number r. It is now in an awkward situation, having emitted the above two instructions. We have some integer in the CPU stack that really belongs in the FPU stack. Since our simple compiler has forgotten where that integer came from, and can't backtrack, it can only retrieve that stack value by popping it into some temporary memory location, then doing an fild on that memory location. (The FPU has no instruction for loading a float from a CPU register, nor for popping the CPU stack into its own stack). So the simple compiler code would have to be this:
mov  EAX,i
push EAX
pop  $RTMP
fild $RTMP
fld  r
fmul

In hindsight, it would have been better to have just emitted the FPU instruction
fild i


which could then be followed by


fld  r
fmul

A further optimization could replace the last two instructions with this single one:
fmul r       ; st(0)= st(0)*r

Lack of hindsight and foresight, as in life, tends to produce messy results.

Useless Operations
Here are some more problems with compi. Consider the following expressions:
0 + a      ; yields 'a'
0 * a      ; yields 0
1 * a      ; yields 'a'
a / 1      ; yields 'a'
a mod 1    ; yields 0

In each case, the compiler should be able to reduce the operation. For example, 0+a should become something like
mov EAX,a

Yet compi will generate this assembler sequence for 0+a:


mov  EAX,0
push EAX
mov  EAX,a
pop  EDX
add  EAX,EDX

Why? Again, because it has neither hindsight nor foresight. It sees only a single operand or a single operator.
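With an AST in hand, these reductions become easy, because the whole operator-operand pattern is visible at once. The following sketch shows the flavor of such a pass for the multiplication identities; it is illustrative only, written against the one/two/isConst/getInteger members of the Ceval class described later in this chapter, and it ignores the deletion of the discarded subtrees:

    // Fold x*0 to 0 and x*1 to x on a MPY node.
    Ceval* foldMpyIdentities(Ceval* node) {
        Ceval* rhs= node->two();             // the right operand
        if (rhs->isConst() && rhs->getInteger() == 0)
            return rhs;                      // x*0: the constant 0
        if (rhs->isConst() && rhs->getInteger() == 1)
            return node->one();              // x*1: just x
        return node;                         // nothing to fold
    }

A symmetric test on the left operand handles 0*a and 1*a.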

Assignment Optimization
The assignment statement affords several opportunities for optimization, which are ignored by compi. Most machine architectures provide a single high-performance instruction that can increment or decrement a number by a constant, and the constant 1 in particular. In Pascal, this operation can only be written as an assignment statement, like this:
i := i + 5;

or this:
i := 5 + i;

The optimal code on an 80x86 for either of these would be:


add i,5

compi does not recognize this optimization. It generates code to evaluate i+5, then additional code to save the result, like this:
mov  eax,i
push eax
mov  eax,5
pop  edx
add  eax,edx
mov  i,eax

Incrementing or decrementing a variable by 1 also has an assembly code shortcut on the 80x86 CPU:
inc  i

or

dec  i

These instructions require fewer bytes, and execute faster than the equivalent add or sub instructions, but are obviously not generated by compi. These would arise in Pascal from these statements:
i := i+1;

or
i := i-1;
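Spotting this pattern is again a matter of inspecting the shape of an ASSIGN tree. The sketch below gives the idea; it borrows the Ceval-style members used later in this chapter (getsemType, one, two, isConst, getInteger, getStringValue) and is not the code in eval.cpp:

    // Emit "inc i" for an ASSIGN tree of the form  i = i + 1.
    bool tryEmitInc(Ceval* root) {
        Ceval* target= root->one();          // the variable assigned to
        Ceval* expr=   root->two();          // the right-hand expression
        if (expr->getsemType() != ADD) return false;
        Ceval *lhs= expr->one(), *rhs= expr->two();
        if (rhs->isConst() && rhs->getInteger() == 1 &&
            lhs->getStringValue() == target->getStringValue()) {
            cout << "  inc " << target->getStringValue() << endl;
            return true;
        }
        return false;                        // use the general code instead
    }

The dec case, and the commuted form 1+i, follow the same pattern.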

Register Allocation
compi was written with the assumption that only one register, EAX, was available to carry a value at runtime. If an intermediate value needed to be saved, which happens often in more complicated expressions, its strategy is to spill values into the runtime stack using push and pop. It even spills values when it isn't necessary with a single register, as we've seen from some of the above examples. A better strategy would be to allocate registers to intermediate results, but this also calls for some foresight. A good register allocation strategy starts with an AST, then decorates the AST nodes with registers through a strategy that minimizes stack spillage.

Optimization with Abstract Syntax Trees


A modern approach to compiler optimization is based on carrying a large clause of the source language in an abstract syntax tree, or AST. The AST can easily be built during parsing using class objects connected with pointers. Syntax errors are reported during parsing as usual, but little or no code is generated until the AST is completely built. The AST can then be examined in a variety of ways, looking for an optimal approach to code generation. Recall from chapter 4 that an abstract syntax tree describes a sequence of arithmetic or logical evaluations as a tree. Each internal node of an AST is an operator. Each child node of an operator is an operand, which may be another instance of an AST or a leaf node. (An internal node has one or more children, while a leaf node has no children). The figure below illustrates an AST for the expression
a*b + c*d

            +
          /   \
        *       *
       / \     / \
      a   b   c   d

An AST represents an evaluation process through the simple rule that an operation may only be performed when the value of each of its child operands is known. The result of performing an operation is some value associated with the operators node. In this tree, the + operation clearly cannot be performed until both of the * operations have been performed.
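Stated as code, the rule is just a postorder walk. Here is a sketch against a made-up Node interface (isLeaf, child, value and apply are our inventions for the illustration):

    // Evaluate an AST bottom-up: children first, then the operator.
    int evalTree(const Node* n) {
        if (n->isLeaf()) return n->value();    // an operand: known directly
        int left=  evalTree(n->child(0));      // both children must be
        int right= evalTree(n->child(1));      //   evaluated first...
        return n->apply(left, right);          // ...then the operator
    }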


An AST could be constructed for each expression, in isolation from other expressions, or for each assignment statement, for each block of assignment statements, for the code of each function, or for the whole program. Constructing an AST for more than one assignment statement requires some attention to reporting semantics errors. These are not normally noticed until they are found during an AST evaluation, but by that time, the parsing has moved past the end of a block of assignment statements, or perhaps even to the end of the program. The error reports should be associated with source code lines, hence any such AST should also carry (at least) line number and file name information on each node.

In this chapter, we will examine optimizations based on an assignment statement AST. Semantic errors discovered can be reported and will appear just after printing source lines. We will build the AST through our LR bottom-up parser, not a top-down parser. This turns out to be more efficient, since we can fold constants during the tree building with a bottom-up parser, not so easily done with a top-down parser. Note that an AST can be constructed with either parser in any case.

We can perform type-checking during parsing, and verify that every identifier has an appropriate type for its operators. Every tree node can have a type associated with it, based on the types of its child nodes. For example, if an ADD node has an integer child and a real child, then the ADD node is clearly associated with type real. We can make these type assessments as the tree is built bottom-up, which is another good reason for choosing an LR rather than a recursive-descent parser.

Each identifier must appear in the symbol table, at least, and we can immediately report an error during parsing for an undeclared identifier, or an identifier with a totally inappropriate type. The advantage of reporting errors during parsing is that the error can be pinned down to the current token. Some errors can only be detected after a second operand is parsed; these will be pinned to the last token of the second operand. Assignment type errors will only be detected when the parser finds the semicolon at the end of the assignment. However, any of these can be reported just after the assignment statement line is parsed, and (presumably) printed.

Synthesized vs. Inherited Attributes


Determining the type of a tree node, based on the node's operator and the types of its children, is an example of attaching an attribute to the node. An attribute is some flag, pointer, or value carried by a node in the AST. In a programming context, it will be a data member of a class associated with the node. Typical attributes attached to an AST node include these:

- the type associated with this node,
- a register assigned to this node, assuming that more than one register will be used during compilation,
- the line number and file name of the source file associated with this node, if necessary.

Attributes may depend on the attributes of neighboring nodes, for example, the parent or the child nodes. When an attribute depends on the attributes of the children of that node, it is said to be a synthesized attribute. For example, the type of a node usually depends on the types of its children, so it is synthesized.


When an attribute depends on the attributes of the parent of a node, it is said to be an inherited attribute. Inherited attributes are not often used, but present no particular problems if the entire AST is available, and a parent node can be located from any of the internal nodes. An attributed grammar is such that the semantics operations are performed entirely by specifying how the attributes of a node depend on those of its parent and its children. It's clear that, given an AST, attributes can be worked out in either direction with a suitable tree-walking strategy. However, there are situations that must be avoided: for example, if an attribute X of a node depends on a child attribute that in turn depends on the same inherited attribute X, a resolution is impossible.
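As a concrete example of a synthesized attribute, here is how a type might be computed bottom-up for a binary arithmetic node. The names (Node, typeOf, tINT, tREAL) are invented for the sketch; the real Ceval class carries its type differently:

    enum Ptype { tINT, tREAL };

    // An arithmetic node is REAL if either child is REAL, otherwise INT:
    // the node's attribute is synthesized from its children's attributes.
    Ptype typeOf(const Node* n) {
        if (n->isLeaf()) return n->declaredType();  // from the symbol table
        Ptype lt= typeOf(n->child(0));
        Ptype rt= typeOf(n->child(1));
        return (lt == tREAL || rt == tREAL) ? tREAL : tINT;
    }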

Building an AST with Qparser


The grammar file as required by Qparser provides enough information to build an abstract syntax tree, with some help from the programmer. The AST is in fact just a reduced derivation tree. The method we've developed is simple, and depends on associating various nonterminal symbols with pointers to a class object, called a Ceval. Each Ceval represents one node in a tree. (In fact, it is derived from the Ctree class discussed in appendix 1). We introduce a special constructor for Ceval, which accepts a variable number of arguments. The first argument is a semType enumerated type that specifies the operator. This will be used to determine whether the node carries an addition (with two children), a multiplication, an identifier, a number, etc. The semType tag will be the same as a production rule tag. This is why it's important to label production rules that carry semantics code. Some of the tags are associated with the set of lexical tokens. Please review the discussion surrounding semType in chapter 10, if you are fuzzy on the concept. Let's discuss the special Ceval constructor first.

A Variable Argument Ceval Constructor


We need a Ceval constructor that makes it easy to build a tree from an arbitrary list of other Ceval object pointers. Class Ceval is derived from several other base classes, including Ctree, which provides a mechanism for carrying siblings and children in a tree structure. Any node can have any number of children, and each child can have any number of children, etc. The implementation code of this constructor is as follows:
Ceval::Ceval(semType st, ...) : Ctree(0), Csem((short) st), ptype(tOTHER), typep(0), svalue(0), label(0), symp(0) { // Construct a Ceval object with child nodes. // The parameters are pointers to non-NULL Ceval objects, // which will become child nodes. // The last parameter must be 0 // Also, st must be a production rule tag, not a token tag va_list args; va_start(args, st); while (1) { Ceval *cp= va_arg(args, Ceval*); if (cp == 0) break;


if (cp->classCode() == CEVAL) appendChild(cp); else if (cp->classCode() == CTOKEN) { Ctoken *ct= (Ctoken*) cp; // our mistake // the only legal Ctoken class here is an Identifier assert(ct->getsemType() == IDENT); Ceval *cp= new Ceval(ct->yieldString(), 1); // identifier appendChild(cp); delete ct; } else assert(0); } currentParser->defeatGConce(); va_end(args); }

Variable Arguments
Our new constructor makes use of a little-known feature in C and C++ called variable arguments. You can learn more details of this feature by reading the man pages for varargs. The macros found in varargs.h permit us to declare a function like this:
Ceval(semType rootSem, ...); // tree builder

in class Ceval. The "..." represents zero or more additional arguments that may be used in the call. It happens that there are several other competing Ceval constructors. This is the only one with a semType as the first argument, so the compiler can resolve it. The varargs scheme depends on all the parameters of the constructor function call being pushed in the stack in the call. A typical constructor call will look like this, written as a grammar code fragment:
E : E + T #PLUS { E.1 = new Ceval(PLUS, E.2, T, (Ceval*)0); }

The tag PLUS reappears in the generated code as a semType enumerated type. So this directs the compiler to choose this variable-arguments constructor for Ceval. The E and T are pointers to Ceval objects, so we pass each of these in the call. The parameter list is terminated with a 0 (NULL), which we cast to a Ceval* pointer, just to remind us of what's going on here. You can find other examples in the Qparser source code. Look at the function include in lib/sets.h and lib/sets.cpp, for example. This accepts a parameter count followed by a list of small integers that are to be appended to a set. For examples of printf style functions, see lib/error.h and lib/error.cpp.

Parameters in C and C++ function calls are pushed on the runtime stack in reverse order, the rightmost one first and the leftmost one last. (See appendix 2 regarding function calls for more details). That means that from within the function, the leftmost parameter will be at a known EBP+n location relative to the current stack frame. That will be the enumerated type st in the above function. The remaining parameters are located at runtime through the macros va_list, va_start, va_arg and va_end, as follows:

- va_list essentially defines a void* pointer args used to locate the passed parameters in the stack.
- va_start sets args to the address of the parameter above st (toward higher addresses in the stack). We know the location of this parameter on the stack, but do not yet know its type (nor its size).
- va_arg specifies the type of this parameter, in this case, Ceval*. It also increments args by the size of this type, and returns a cast to that type. This can all be done with legal C operations, although it's messy and dangerous, so the C/C++ compiler issues no warnings on this operation. The type need not be a pointer; it could be anything. However, note that this program must know what types are on the stack; the compiler cannot determine that.
- va_end sets args to 0, effectively terminating this variable arguments operation.

It happens that in Ceval::Ceval, we've permitted passing Ctoken* pointers as well as Ceval* pointers. We've done this by using the classCode function to determine which object was passed. If the object is a Ceval, then we just use that pointer. If it's a Ctoken, then we create a new pointer with that cast. This depends on the fact that we can call classCode as a virtual function, regardless of what the rest of the object contains. If the passed pointer is neither a Ceval* nor a Ctoken*, an assert trap is raised. This provides some protection against foolish mistakes when using this constructor.

Since va_arg increments the stack pointer, we can embed it in a while loop. When we discover (in this case) that the stack value is 0, rather than a non-NULL pointer, we stop reading stack values.
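A tiny, self-contained example may make the macro mechanics clearer. This one is not from Qparser; it simply sums a list of nonzero ints, using a trailing 0 as the end marker in the same way the Ceval constructor uses a trailing NULL pointer:

    #include <cstdarg>
    #include <iostream>
    using namespace std;

    long sumInts(int first, ...) {
        long total= first;
        va_list args;
        va_start(args, first);           // position args just past 'first'
        while (1) {
            int v= va_arg(args, int);    // fetch the next int; advance args
            if (v == 0) break;           // 0 marks the end of the list
            total += v;
        }
        va_end(args);
        return total;
    }

    int main() {
        cout << sumInts(3, 5, 7, 0) << endl;   // prints 15
        return 0;
    }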

This is a clever and powerful mechanism. It is in fact used in the C library to implement fprintf, printf, sprintf, scanf, sscanf and fscanf. (Use the man pages if you aren't familiar with these functions). There's also a special vsprintf function designed to work with variable arguments when the first parameter is a char array with embedded formatting commands. The varargs mechanism is supported on any computer platform with a C compiler, not just the Intel platform; it's part of the C standard. Although the underlying mechanism of these macros will depend on the platform, their interface cannot. These macros are also supposed to survive any optimization of function calls, including inline calls, system calls, etc. However, with this power and flexibility, one pays a certain price:

- Type checking is essentially defeated for variable arguments. When the function call is written, one must be careful that the types passed are compatible with the function definition. Anything can be passed as a variable argument with no warnings from the compiler, but passing the wrong thing will result in garbage or a segmentation violation.
- Some way of determining the number of parameters, or the last parameter, must be provided. In this case, the last parameter must be 0, which is a good technique for passing pointers. In the case of printf, the number of parameters beyond the format string must agree with the number of formatting commands in the format string.
- You need to pay close attention to the size in bytes of the passed parameters. For example, an int may or may not have the same size (determined by the sizeof function) as a long int on some platform, and you are not supposed to know that. But passing an int to a function that expects a long int, or vice versa, is asking for trouble. Pointers are assumed to have the same size regardless of their base types. So you can pass a pointer to anything without upsetting the size requirement. But note that within the function, you are expected to assign each parameter to a particular type, which means that the function must be able to figure out the types of each of the parameters.
- You cannot just call another function from a varargs function, assuming that the variable arguments will line up in the second call. They won't line up. You must read all the parameters with the varargs macros, then pass them on as needed to a secondary function.


Defeating Garbage Collection


Notice the call
currentParser->defeatGConce();

at the end of this constructor. As discussed near the end of chapter 10, the Qparser LR machine normally deletes all pointers found on the stack, replacing them by the pointer returned through an assignment to the left member of the production rule. An exception is a single production, in which case the pointer on the stack is kept in place. The deletion strategy can be defeated globally, which would not be wise, or locally for this particular production rule. We must clearly defeat garbage collection on each constructor call that builds an AST, since otherwise we will have deleted objects in the AST that will later be required while walking the AST. The entire AST is easily deleted when we are finished with it, with a single delete call on its root node. This is also why each node in an AST must be allocated from the heap, and incidentally why each node in an AST must have the same type.

Building Tree Nodes


Let's return to our grammar, and see just how this variable-arguments constructor can be used to build a tree. Consider this production rule again:
Expr : Expr + Term #ADD { Expr= new Ceval(ADD, Expr.2, Term, CNULL); }

          ADD
         /   \
   Expr.2     Term
    tree      tree

This will yield a portion of a tree that looks like the one shown above. The ADD node represents a Ceval object containing two pointers to child Ceval objects, one for Expr.2 and a second for Term. It contains the semType value ADD, which we noticed is the tag for this production rule.

Since this is a bottom-up construction process, the tree corresponding to the nonterminal Expr.2 will already have been built. The tree for Term will also have been built. These must always be Ceval objects. Either one may represent a simple token, or some operator tree. The constructor call creates a new Ceval object. Since Ceval is derived from Ctree, it inherits child and sibling pointers with which a new tree node can be constructed (see the C++ Primer, chapter 0, for more details about class Ctree). The new object carries a semType marker ADD, and two children, which are the previously-constructed Expr.2 and Term objects. The first parameter of this constructor must be a semType (the ADD parameter in the call). Any number (zero or more) of arguments may follow this one, but each one must be a pointer to a Ceval or Ctoken object, allocated from the heap. (These are the Expr.2 and Term parameters in the call). The last parameter in the constructor call, CNULL, is needed to mark the end of the list of children. CNULL is the NULL pointer, 0. The first argument, Expr.2, will become the leftmost child of this node. The remaining arguments


are appended to the leftmost child along its sibling linked list, using the appendChild member function of the Ctree base class. Note that the pointers are copied, not the values. These pointers came from the LR parser stack, and were initially allocated from the heap. We clearly must be sure that these are not subjected to a delete until we are finished with the tree we are building. This constructor calls defeatGConce, which suppresses the parser's default deletion of any pointers found in the right member of a production rule. We need to do this since these pointers have been transferred to our tree structure and therefore need to remain alive for a while. Without this, the parser would delete the Expr.2 and Term objects, rendering our tree pointers to these invalid. Note that all Ceval objects brought into the tree must be allocated from the heap. Eventually, the tree will be deleted, and in the process all child nodes will be deleted. A non-heap object in the tree would likely cause a crash in the attempt to delete it.

Single Productions
The production rule
Expr : Term

requires no C++ code. What happens during tree building is that Term is associated with a pointer to some tree. The parser will simply copy this pointer to Expr, so that it effectively points to the same tree that Term did. This action is built into the parser code, requiring no action on your part. This is an example of a single production. Such production rules serve as a kind of transfer operation during parsing, and don't need to appear in the final AST.

Parentheses
The production rule
Primary : ( Expr ) #PARENS { Primary= Expr; }

also simply passes the Expr pointer to the Primary pointer. No PARENS node will show up in the AST. Here we do need to explicitly copy the pointer. The production rule is not a single production, and requires some special coding on our part. The purpose of parenthesizing expressions is to establish some special operator precedence. The precedence of the operators will be conveyed by the form of the AST, so we don't need to include a special node that designates a parenthesized expression. We only need to transfer the Expr pointer to Primary, which effectively causes any tree rooted in an Expr to become a tree rooted in Primary.

Identifiers
Primary : Identifier #VARIABLE
{
    const string np= Identifier->getStringValue();
    Csymbol cs;


    if (!symtab.findSymbol(np, cs))
        syntaxError("undeclared identifier: %s", np.c_str());
    Primary= new Ceval(Identifier);
}

Here, the Identifier is a leaf node. It's a good idea to see if the identifier is in the symbol table, as we've done here, and to complain about a syntax error otherwise. We check the identifier in the production rule immediately, so that any error associated with it will be reported at the appropriate place in the input source stream by syntaxError. If we waited until the tree is evaluated, the error message would point at the end of the expression, and not at the offending identifier. In any case, a new Ceval object will be constructed, based on the identifier, and it becomes the tree associated with Primary. The Ceval class contains a special constructor that knows how to convert a Ctoken object into a Ceval object. It happens that Ctoken objects do not derive from Ctree, and therefore lack a mechanism for being built into an AST; in other words, we can't just copy its pointer to the Primary object.

Numbers
Here is another leaf node, this time a literal integer. As with an identifier, we create a new Ceval object based on the integer, and this becomes the leaf node associated with Primary. As before, an Integer is a pointer to a Ctoken object, and a special Ceval constructor knows how to convert it into a Ceval object.
Primary : Integer #INTVAL { Primary= new Ceval(Integer); }

The Complete Grammar


Here's a complete grammar for building ASTs for assignment statements. We will discuss what to do with the tree later; this will be done through an external function eval, which is called in the ASSIGN production rule. We also call function foldConst, which is designed to look for constant operations and fold them, as we've discussed above. We can do this on every AST created with some operator root node. It may or may not collapse the tree to a simpler form. Note that since the parsing is bottom-up, this will effectively fold a whole sequence of constant operations in a correct bottom-up fashion. We don't have to wait until a whole tree is built to do this. This is excerpted from file comp/comp.grm, with some details and production rules omitted. Some of its features are discussed in italics.
Goal : Stmts EOF #QUIT

The following two production rules provide a sequence of Stmt forms terminated by semicolons.
Stmts : Stmts Stmt ; | Stmt ;

This is a C-style assignment statement. Cond derives a generalized expression form


Stmt : Identifier = Cond #ASSIGN
{

Check the identifier against the symbol table as usual

    const string np= Identifier->getStringValue();
    semType st= (semType) Cond->getsemType();
    Csymbol symp(np);
    if (!symtab.findSymbol(np, symp)) {   // not there yet
        symtab.pushSymbol(np, symp);
        declare("dword", np);
    }

Create an ASSIGN node with Identifier as the left child and Cond as the right child.
Ceval* cp= new Ceval(ASSIGN, Identifier, Cond, CNULL);

See if we can optimize the tree by folding constant operations


cp= cp->foldConst();

Evaluate the tree, i.e. generate assembler for it. It's possible that the tree cp is empty. In that case, cp is not NULL, but its tag is OTHER.
    cp->eval();
    delete cp;    // release the tree now that we're done
}

This syntax lets us print strings and expression trees in Pascal style, e.g. writeln("value a= ", a);
| writeln ( PrintList ) #PRINT { PrintList->doPrint(); }

PrintList : PrintList , PrintItem #PLIST
{   PrintList.1= new Ceval(PLIST, PrintList.2, PrintItem, CNULL); }
| PrintItem

PrintItem : Cond
| String #PSTRING
{   PrintItem= new Ceval(String); }

// An if-then-else expression like that in C
Cond : Boolean ? Cond : Cond #IFTHEN

{

This is where a tree node is created. It has the tag IFTHEN, and three children: Boolean, Cond.2 and Cond.3.
Cond= new Ceval(IFTHEN, Boolean, Cond.2, Cond.3, CNULL);

See if this tree can be reduced.


    Cond= Cond->foldConst();
}
| Boolean

Boolean : Expr Comparison Expr #COMP
{

This builds a tree node with three children. The middle child, Comparison, is a trivial tree containing only a tag LT, LE, etc.
Boolean= new Ceval(COMP, Expr.1, Comparison, Expr.2, CNULL); Boolean= Boolean->foldConst(); }


A Boolean can also be just an Expr tree.


| Expr

The idea in this section is to accept one comparison operator, such as "<=", converting it into a special tree node. The evaluator can then work out the necessary conditional branch instruction. In each case, we construct a trivial tree node with no children, but carrying a tag that represents the operator.
Comparison : < #LT
    { Comparison= new Ceval(LT, CNULL); }
| <= #LE
    { Comparison= new Ceval(LE, CNULL); }
| > #GT
    { Comparison= new Ceval(GT, CNULL); }
| >= #GE
    { Comparison= new Ceval(GE, CNULL); }
| = #EQ
    { Comparison= new Ceval(EQ, CNULL); }
| <> #NE
    { Comparison= new Ceval(NE, CNULL); }

This is how an arithmetic tree is built


Expr : Expr + Term #ADD {

As explained earlier, this constructs a tree node labeled ADD with two children.
Expr= new Ceval(ADD, Expr.2, Term, CNULL);

foldConst sees if this tree, whose root node is ADD, can be reduced in some way. For example, the two children might be constants, so the compiler can do the addition.
    Expr= Expr->foldConst();
}
| Expr - Term #SUB
{
    Expr= new Ceval(SUB, Expr.2, Term, CNULL);
    Expr= Expr->foldConst();
}
| Term

Term : Term * Unary #MPY
{
    Term= new Ceval(MPY, Term.2, Unary, CNULL);
    Term= Term->foldConst();
}
| Term / Unary #DIVIDE
{
    Term= new Ceval(DIVIDE, Term.2, Unary, CNULL);
    Term= Term->foldConst();


}
| Unary

Unary : - Primary #UMINUS
{

The unary operator results in a tree with one child, Primary.


    Unary= new Ceval(UMINUS, Primary, CNULL);
    Unary= Unary->foldConst();
}
| Primary

Primary : ( Cond ) #PARENS
    { Primary= Cond; }
| Identifier #VARIABLE
{
    const string np= Identifier->getStringValue();
    Csymbol cs(np);
    if (!symtab.findSymbol(np, cs))
        syntaxError("undeclared identifier: %s", np.c_str());

This builds a tree node containing the identifier as a string. The tag is drawn from the Ctoken object, which is IDENT.
    Primary= new Ceval(Identifier);
}
| Integer #INTVAL
{

This builds a tree node containing the integer. The tag will depend on the size of the integer, CHAR, UCHAR, SHORT, etc.
Primary= new Ceval(Integer); }

Tracing the Tree Construction


Suppose that we apply the above grammar to the expression
a*15 + 4*d

The LR parser will yield the following sequence of production rules in its REDUCE actions. We've omitted all the single productions from the list, which contribute nothing to the tree construction:
1. Primary : Identifier        (a)
2. Primary : Integer           (15)
3. Term : Term * Unary  #MPY   (a*15)
4. Primary : Integer           (4)
5. Primary : Identifier        (d)
6. Term : Term * Unary  #MPY   (4*d)
7. Expr : Expr + Term   #ADD   (a*15 + 4*d)

Reduce action 1 creates a Ceval object for the identifier (a). That's one tree, albeit a trivial one, consisting of a single Ceval object with no children. This tree, like all the others, will be temporarily rooted in the parser stack, associated with the nonterminal Primary. It will eventually be built into another tree by moving the pointer from the parser stack into a node's child position. That will happen later, in reduce action 3.

Reduce action 2 creates a Ceval object for the integer (15). That's a second trivial tree, as yet unlinked to the tree from step 1. We now have two pointers carried in the parser stack.

Reduce action 3 links the trees from steps 1 and 2 to form a new tree, which will look like the following. This makes use of the variable-argument Ceval constructor discussed above. This reduce action will end by clearing the three right members from the parser stack. Before that happens, our semantics operations will have copied the two pointers to the a and the 15 into the child positions of the MPY tree.

      MPY
     /   \
    a     15

Reduce actions 4 and 5 create Ceval objects for the next integer (4) and identifier (d). Reduce action 6 links the trees from steps 4 and 5 to form another tree, as yet unconnected to the tree from step 3. We therefore have the following two trees (the two MPY nodes are carried in the parser stack):

      MPY           MPY
     /   \         /   \
    a     15      4     d

Reduce action 7 links the trees from step 6 by connecting them with an ADD operator, like this:

            ADD
           /   \
        MPY     MPY
       /   \   /   \
      a    15 4     d

This completes the tree construction. At this point, the Expr returned in the seventh action is associated with the ADD node at the tree's root. Notice that all the nonterminal names in the grammar have disappeared. It's clear that, given a pointer to the root node of any of the intermediate trees, the rest of the tree structure can be inferred from each node's child pointers. But what are the root nodes attached to? The answer is: the compile-time parser stack. On each LR REDUCE action, a pointer to the just-built tree is returned to the compiler stack, whence it is held until it later appears connected to a nonterminal in some production rule.

For example, in reduce action 7 the stack contains a pointer associated with Expr.2 and with Term. The Expr.2 pointer is at index TOS-2 in the stack, and the Term pointer is at index TOS in the stack. The Ceval constructor creates a new node designed to carry these two pointers as child pointers. It also sets the left child pointer to Expr.2 and the right child pointer to Term. The new Ceval pointer will then replace both of these in the stack, effectively becoming a pointer associated with the production rule's left member Expr.1.


Expression Evaluation with Ceval


Let's now explore how to walk through an AST, looking for more optimizations and generating more efficient assembly code. Recall that each node in our tree is a Ceval object. Each of these inherits tree functions from Ctree, which permits us to access the child nodes through some simple function calls. We will call the member function eval on the root node of some tree that we wish to see optimized and ground into assembler.

The Ceval Class


The Ceval class is declared in file eval.h, in directory comp. It's derived from Csem, Csemtype, and Ctree.
#define CEVAL 3

class Ceval : public Csem, public Csemtype, public Ctree {
    // these have meaning only for the specified semType
    union {                   // classified by semType:
        long int ivalue;      //   CHAR .. ULONG
        double   dvalue;      //   FLOAT .. DOUBLE
    };
    string svalue;            // IDENT, STRING, CCODE
    static int label;
    void printItem(void);
    void doPrint1(void);
public:
    Ceval(Ctoken *tok);
    Ceval(semType rootSem, ...);   // tree builder
    virtual ~Ceval(void);
    void eval(void);
    Ceval* foldConst(void);
    virtual void dump(ostream& out);
    virtual int classCode(void) {return CEVAL;}
    static void initLabel(void) {label= 0;}
    static int newLabel(void) {return label++;}
    int isConst(void);
    int isSimple(void);
    long getInteger(void);
    void setInteger(semType semt, long value);
    double getDouble(void);
    void setDouble(semType semt, double value);
    const string getStringValue(void);
    void doPrint(void);
    Ceval* toChild(int n= 0) {return (Ceval*) (Ctree::toChild(n));}
    Ceval* one(void)   {return toChild(0);}
    Ceval* two(void)   {return toChild(1);}
    Ceval* three(void) {return toChild(2);}
};


The data members ivalue, dvalue and svalue are used to carry literal values associated with numbers, identifiers and strings. The static variable label is used to manufacture new names for assembly labels. By declaring this static, we ensure that only one instance occurs. When newLabel is called, the current label value is returned, and the (single) label is incremented by one.
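For example, a code generator needing a fresh branch target might use it like this (the L-prefix formatting is our own choice for the illustration, not something the class dictates):

    int lab= Ceval::newLabel();         // e.g. returns 0 the first time
    cout << "  jz L" << lab << endl;    // branch to the label...
    // ... emit the code being skipped ...
    cout << "L" << lab << ":" << endl;  // ...and define it here

Each call returns a distinct integer, so no two generated labels can collide.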

The Ceval Member Functions


We've discussed the most interesting constructor, with the variable arguments, above. Most of the member functions of Ceval are designed to work through the completed tree, and perform the indicated operations. Here are descriptions of the key functions:

- Ceval* one(): returns a pointer to the leftmost child of the current node. In walking the tree, we'll often call eval on one of these child nodes. This returns NULL if there are no children. It merely calls function Ctree::toChild, using the Ctree base class of Ceval.
- Ceval* two(): returns a pointer to the second child of the current node.
- void eval(): evaluate the tree by emitting code. This should be called on an expression form. For example, it might generate Pentium assembler code such that, when executed, the integer value of the tree will be left in EAX at runtime.
- Ceval* foldConst(): if this tree node is an operator, and both child nodes are constant, then this node will become a constant node, i.e. the constant evaluation will be done by the compiler, not by issuing runtime code. This usually requires changing a node from an operator to a constant, and deleting one or both child nodes. See the C++ code and the discussion given below for details on how this can be done safely. foldConst returns a pointer to a tree root, which is usually "this", but could be a new tree node.
- int isConst(): if this tree node represents a runtime constant, return true, otherwise false.
- int isSimple(): the idea of simple is that this tree node can be represented by a simple name or number. That means it can be dropped into a mov or add (etc.) instruction directly, rather than be evaluated through a sequence of instructions. That often saves unnecessary pushes and pops. So this returns true if the tree is simple and false otherwise.
- long getInteger(): retrieves the integer value of this tree. The tree must be a constant.
- void setInteger(semType semt, long value): if the tree represents an integer, sets the integer value and its semType tag.
- const string getStringValue(): returns a string representing the tree. This only works if the object is simple.
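To make the foldConst contract concrete, here is a sketch of the constant-constant case for an ADD node. It is not the code in eval.cpp; it uses only the members listed above plus setInteger, it assumes an integer tag such as LONG is appropriate, and it glosses over unlinking and deleting the two child leaves:

    // Illustrative only: fold ADD(k1,k2) into the constant k1+k2.
    Ceval* Ceval::foldAddSketch(void) {      // hypothetical helper
        Ceval *lhs= one(), *rhs= two();
        if (lhs->isConst() && rhs->isConst()) {
            setInteger(LONG, lhs->getInteger() + rhs->getInteger());
            // this node is now a constant leaf; the two children
            // would be unlinked and deleted here
        }
        return this;                         // folded or not, same root
    }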

Expression Evaluation Ceval


We are now in a position to show how to generate efficient 32-bit Intel assembler code from our AST. This will become our first optimizing compiler. Later, we'll discuss how to introduce more types into expressions, and how to generate efficient assembly code for them.

The evaluation and constant folding code for this example is in the file qparser/comp/eval.cpp. The grammar is in qparser/comp/comp.grm, also given above. The complete grammar contains all four arithmetic operators, and in addition, negation, comparison, and an if-then-else expression, all operating on 32-bit integers. We will use a 16-bit integer to represent a boolean value for comparisons, i.e. 1 represents TRUE and 0 represents FALSE, as in the C language.

A complete program is a sequence of assignment statements separated by semicolons. A special printing statement, the writeln, can also be used as a statement. Here's an example of a program accepted by the comp.grm grammar:
// comp2.in: a COMP input program


// illustrating various constant folding and other optimizations

a=5;
b= 2+a+4+2*a-6;        // no optimization for this
writeln("a= ", a, ", b= ", b);   // write a line showing a and b
b= 2+4-6+(1+2)*a;      // this is the same & optimizes...
b= 3* -4;
writeln("b= ", b);

a= 5*(8-6)/4;          // overall constant folding
a= a-(a-a);            // should disappear completely, as a=a
b= a*0;                // is b=0
b= a*1;                // is b=a
b= a/1;                // is b=a
b= a/a;                // is b=1
b= a+0;                // is b=a
b= a-0;                // is b=a
b= (a*a) - (a*a);      // is b=0

// incrementation cases
a= a+1;
a= a-1;
a= 1+a;
a= 1-a;
a= a+5;
a= 5+a;

// some comparisons, also folded
b= 1>0;
b= 0<1;
b= (a-a) < (a-a);
b= (a-b) < (a-b);

// using the Boolean construct
b= 5>3 ? 1 : 2;        // is b= 1
b= 5>3 ? b : 0;        // should disappear
b= 5<3 ? b : 0;        // is b= 0

// no optimization here
b= a<b ? 3+5 : 4-22;

All the variables are integers, and need no declarations. Each variable becomes declared by appearing in the left member of an assignment statement. What will be interesting about this input program is the assembly code generated by our little compiler comp.exe. It is remarkably optimal, and (as we'll see) achieving this optimality is not very difficult.

Most of the optimization intelligence is in file eval.cpp, which contains functions foldConst() and eval(). foldConst is called each time a new tree node is attached to existing trees. It looks for various ways of reducing the tree to simpler forms. eval is called on the completed assignment tree, and is primarily responsible for generating symbolic assembly code to evaluate the tree. It, too, looks for optimization opportunities. Several examples follow.

Consider this assignment statement:
b= 5>3 ? b : 0;


The Boolean in the right member is clearly a constant. Since 5>3 is true, the conditional must select b. The statement is therefore equivalent to
b= b;

which does nothing, and should result in no code. Constant folding is illustrated by this assignment:
a= 5*(8-6)/4;

This reduces to a single instruction:


mov a,2

since 5*(8-6) = 5*2 = 10, and 10/4 is 2 in integer mode. Here's an expression in which no constant folding occurs, since there's never a simple binary operator tree with two constant children. The generated assembler code could be improved upon, but is otherwise better than with a one-pass compiler.
; 5: b= 2+a+4+2*a-6;     // no optimization for this
        .DATA
b       SDWORD 0
        .CODE
        mov  EAX,a
        add  EAX,2
        add  EAX,4       ; note that 2+a+4 is done in three instructions
        push EAX
        mov  EAX,a
        imul EAX,2
        pop  EDX
        add  EAX,EDX
        sub  EAX,6
        mov  b,EAX

Because the constant operations were separated by non-constant operations, the obvious folding in the previous expression was not done by this version of eval. Here's an equivalent expression in which the constants are folded:
; 7: b= 2+4-6+(1+2)*a;
        mov  EAX,a
        imul EAX,3
        mov  b,EAX

These assignment statements illustrate the removal of useless operations and useless assignments:
; 12: a= a-(a-a);        // should disappear completely, as a=a
; 13: b= a*0;            // is b=0
        mov b,0
; 14: b= a*1;            // is b=a
        mov EAX,a
        mov b,EAX
; 15: b= a/1;            // is b=a
        mov EAX,a
        mov b,EAX
; 16: b= a/a;            // is b=1
        mov b,1
; 17: b= a+0;            // is b=a
        mov EAX,a
        mov b,EAX
; 18: b= a-0;            // is b=a
        mov EAX,a
        mov b,EAX
; 19: b= (a*a) - (a*a);  // is b=0
        mov b,0

These illustrate some incrementation and decrementation optimizations (machine-dependent):


; 21: // incrementation cases
; 22: a= a+1;
        inc a
; 23: a= a-1;
        dec a
; 24: a= 1+a;
        inc a
; 25: a= 1-a;
        neg a
        inc a
; 26: a= a+5;
        add a,5
; 27: a= 5+a;
        add a,5

A Survey of Function eval


Function eval (in file qparser/comp/eval.cpp) is designed to generate code for a complete assignment statement. It works in a top-down manner. It's first called when the production rule for an assignment statement is reduced (see the grammar rule for ASSIGN given above). Until this production rule is reduced, all the reduce operations are used to construct an AST, or to reduce an AST by folding constants. We've seen that eval is called in just one place in our grammar--when an ASSIGN tree has just been completed. After the eval call, the tree is deleted.

Function eval is mostly a large switch statement based on the node tag semt, which is part of the Csem object, and fetched by the function getsemType(). This node tag sorts out the various tree node cases, which we operate on in a recursive fashion.
void Ceval::eval(void) {
   // evaluate this subtree, leaving the result in EAX
   // This works several optimizations
   // And -- much of this is machine dependent
   // Evaluations are for 32-bit integers only, no floats
   switch (semt) {
   case OTHER:
      break;
   case ASSIGN:
      if (one()->isIdentical(two())) break;
      if (one()->increment(two())) break;
      if (two()->isConst()) {
         cout << " mov  " << one()->getStringValue()
              << "," << two()->getStringValue() << endl;
         break;
      }
      // our optimizations failed...
      // so we do it the hard way...
      two()->eval();
      cout << " mov  " << one()->getStringValue() << ",EAX" << endl;
      break;
   case IDENT:
      cout << " mov  EAX," << getStringValue() << endl;
      break;
   ... etc. for all other cases: ADD, SUB, MPY, IFTHEN, etc.
   }
}

The ASSIGN Case


For an assignment statement, which is at the root of our tree, we are directed to the ASSIGN case of the switch statement in eval:
case ASSIGN:
   if (one()->isIdentical(two())) break;
   if (one()->increment(two())) break;
   if (two()->isConst()) {
      cout << " mov  " << one()->getStringValue()
           << "," << two()->getStringValue() << endl;
      break;
   }
   // our optimizations failed...
   // so we do it the hard way...
   two()->eval();
   cout << " mov  " << one()->getStringValue() << ",EAX" << endl;
   break;

We could do something really simple here. We could just write this:

   two()->eval();
   cout << " mov  " << one()->getStringValue() << ",EAX" << endl;

which will always be correct. But this ignores several optimizations, which are given in the first few lines of the above code. Note that the left member of the ASSIGN production rule will always be an identifier, but the right side could be any expression, ranging from a simple constant or identifier through something more complicated. Also note that before this production rule is seen, the right member expression has been subjected to a bottom-up constant folding operation during its construction. That means that this:
a= a +16-15;

will have been transformed into an AST that represents this:


a= a+1;

For optimizations, we look for three special cases before resorting to the long way. isIdentical checks whether the right member is an identifier equal to the left member. (In fact, the function isIdentical can test any two trees for equivalence.) If that's the case, no code need be generated. The second function, increment, looks for any of the incrementation or decrementation cases, like these:


k= k+1;
k= 5+k;
k= k-5;

These can be implemented through single instructions in the Intel architecture. Similar instructions are usually provided in other architectures. A third optimization (if (two()->isConst())) applies if the right side of the assignment statement is a constant. We can then just write

   mov  variable,constant

If all of these optimizations fail, then the assignment is done the hard way, as described above.
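The isIdentical test is worth sketching on its own. The real function works on Ceval trees; the toy Node type below is ours, for illustration only. Note that replacing a= a by nothing is safe only because expressions in this little language have no side effects:

   #include <string>

   // Toy expression node: a tag plus up to two children and a lexeme.
   struct Node {
      std::string tag;      // "ADD", "IDENT", "CONST", ...
      std::string text;     // identifier name or constant spelling
      Node* left  = nullptr;
      Node* right = nullptr;
   };

   // Structural equality: two trees are identical if their tags, lexemes,
   // and corresponding subtrees all match.
   bool isIdentical(const Node* a, const Node* b) {
      if (a == b) return true;          // same node, or both null
      if (!a || !b) return false;
      return a->tag == b->tag && a->text == b->text
          && isIdentical(a->left,  b->left)
          && isIdentical(a->right, b->right);
   }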

Code Generation for an Addition


Addition starts with this production rule:
Expr : Expr + Term  #ADD  {
         Expr= new Ceval(ADD, Expr.2, Term, CNULL);
         Expr= Expr->foldConst();
       }

When the AST is (eventually) evaluated, we will see a root node ADD with two children. The foldConst function will have reduced any constant additions, and also all useless additions (+0), so we will never find both of the children to be a constant, and some assembler addition operation is clearly called for.

In the most general case (the hard way), both children are trees to be evaluated. For a single-register strategy, we then need to call eval on the left child (which is supposed to leave a result in EAX), push EAX, call eval on the right child, and finish with:

   pop EDX
   add EAX,EDX

Recall that the Intel instruction set forbids adding two memory variables, like this:

   add a,b    ; !!! FORBIDDEN !!!

So we need the register EAX, but this code produces unnecessary push and pop operations in many common cases, as we've discussed previously. For the Intel architecture, we can take advantage of the add instruction features that permit us to write
add EAX,something

where something could be a constant or a simple identifier. Addition is also commutative, so we can exploit this form whether the something appears as a right child or a left child. That leads us to design a function isSimple(), which tests a node for the simple quality, i.e. a constant or an identifier. Our optimal addition code now looks like this in eval.cpp:
case ADD:
   if (one()->isSimple()) {

The left child is simple:

      two()->eval();
      cout << " add  EAX," << one()->getStringValue() << endl;
   }
   else if (two()->isSimple()) {

The right child is simple:

      one()->eval();
      cout << " add  EAX," << two()->getStringValue() << endl;
   }
   else {

We have to do it the hard way, saving the left child value in the stack temporarily:

      one()->eval();
      cout << " push EAX" << endl;
      two()->eval();
      cout << " pop  EDX" << endl;
      cout << " add  EAX,EDX" << endl;
   }
   break;

Code Generation for a Subtraction


Subtraction is a bit more difficult to optimize on the Intel, because there's no reverse subtract operator. This means that the expression
k*3-15

can be written as

   ; get k*3 into EAX
   sub eax,15

but

   15-k*3

can't be written as

   ; get k*3 into EAX
   sub eax,15

We need to add the instruction

   neg eax

to make it come out right. Bearing that in mind, here's how the eval code looks:
case SUB:
   if (one()->isSimple()) {
      two()->eval();

This case is begging for a reverse subtraction operator!

      cout << " sub  EAX," << one()->getStringValue() << endl;
      cout << " neg  EAX" << endl;
   }
   else if (two()->isSimple()) {

This case is like the addition optimization. Note that we don't need to generate a neg instruction:

      one()->eval();
      cout << " sub  EAX," << two()->getStringValue() << endl;
   }
   else {
      // the hard way; by evaluating the right child first,
      // no neg instruction is required
      two()->eval();
      cout << " push EAX" << endl;
      one()->eval();
      cout << " pop  EDX" << endl;
      cout << " sub  EAX,EDX" << endl;
   }
   break;


Multiplication
Multiplication is almost exactly like addition. The imul instruction has the same form as the add instruction and is subject to the same optimizations.

Division
For division, the dividend must be in EDX:EAX before the idiv instruction is issued. Since EAX carries the dividend, and it is a signed number, the instruction cdq must be issued to adjust EDX: cdq extends the sign bit of EAX into EDX, preparing EDX:EAX properly for our signed division operation. The quotient will end up in EAX, where we wish it to be, and the remainder in EDX, which will be ignored. There are no shortcuts.

We can test for division by a constant 0, and issue an error in that case. If the divisor is a variable, we can also emit runtime code to test for zero. This is a protection mechanism that might be eliminated for high performance through a compiler option. Here's what that eval code looks like:
case DIVIDE:
   one()->eval();                    // numerator to EAX
   if (two()->isConst()) {           // two could be a constant, one can't be
      long rvalue= two()->getInteger();
      if (rvalue == 0) {
         cout << "; ** division by zero!" << endl;
         rvalue= 1;
      }
      cout << " cdq" << endl;        // expand EAX into EDX:EAX
      cout << " mov  ECX," << rvalue << endl;  // idiv takes no immediate operand
      cout << " idiv ECX" << endl;   // don't bother with a runtime test
   }
   else {                            // runtime test for division by 0
      if (two()->isSimple())
         cout << " mov  ECX," << two()->getStringValue() << endl;
      else {
         cout << " push EAX" << endl;       // save the numerator...
         two()->eval();
         cout << " mov  ECX,EAX" << endl;   // ...move the divisor to ECX...
         cout << " pop  EAX" << endl;       // ...and restore the numerator
      }
      cout << " cdq" << endl;        // expand EAX into EDX:EAX
      cout << " cmp  ECX,0" << endl; // test for 0 divide
      int labelValue= newLabel();
      cout << " jz   $LBL_" << labelValue << endl;
      cout << " idiv ECX" << endl;
      cout << "$LBL_" << labelValue << ":" << endl;
   }
   break;

We might also have 0 divided by a non-zero value. The result is 0, unless the divisor is also zero. This case is reduced as part of the constant folding operation, described later.
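The cdq/idiv pair implements signed division that truncates toward zero. C++ (since C++11) guarantees the same truncation for its built-in integer division, so the semantics the generated code must reproduce can be checked on the host:

   #include <cassert>

   int main() {
      // idiv truncates toward zero; the remainder takes the dividend's sign.
      assert( 10 /  4 ==  2 &&  10 % 4 ==  2);
      assert(-10 /  4 == -2 && -10 % 4 == -2);  // not -3: truncation, not floor
      assert( 10 / -4 == -2);
      // quotient*divisor + remainder == dividend always holds
      assert((-10 / 4) * 4 + (-10 % 4) == -10);
   }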

The Comparison Operator


The comparison operators (<, <=, >, >=, =, <>) appear in two related production rules. Here's one:
Boolean : Expr Comparison Expr  #COMP  {
            Boolean= new Ceval(COMP, Expr.1, Comparison, Expr.2, CNULL);
            Boolean= Boolean->foldConst();
          }
        | Expr


The Comparison nonterminal is defined by six rules like these:


Comparison : <   #LT  { Comparison= new Ceval(LT, CNULL); }
           | <=  #LE  { Comparison= new Ceval(LE, CNULL); }
           | (etc.)

Clearly, the specific operator will be represented in the COMP tree as the second child. (The first child is the leftmost expression and the third child is the rightmost expression). The second child will be a tree node with no children, but its tag will be LT, LE, EQ, NE, GT, or GE. In this simple grammar, the whole purpose of doing this comparison is to decide which of two expressions should be evaluated. For that, we can generate a cmp instruction, then an appropriate conditional jump. (Consult appendix 2 if you are unclear on this assembler concept). Consider this example expression:
a<b ? c : d

Then the assembly code for this conditional should look something like this:
    mov EAX,a      ; assumes that a is simple
    cmp EAX,b      ; this assumes that b is simple
    jge L1
    mov EAX,c      ; load c, assuming it is simple
    jmp L2
L1: mov EAX,d      ; load d
L2:

As with addition, the Intel cmp instruction forbids comparing two memory operands. We can, however, compare a memory operand with a literal constant. Generating an optimal cmp instruction is much like generating an optimal add. We should check both the left and right sides for the simple condition. Comparing k to 5, i.e. k<5, can be done with

   cmp k,5

We couldn't do that with add or sub, because they would change the value of k. However, cmp doesn't change k, so this form is acceptable. Note that the reverse form, cmp 5,k, is not legal, though: the first operand of cmp may not be a literal constant.

We can also assume that foldConst will have reduced all the trivial cases, in which:
o the Boolean is constant, or
o the two expressions in the comparison are identical or constant, or
o the two rightmost expressions of the IFTHEN are identical.
That implies that our eval case is particularly simple to encode, as follows:
case IFTHEN: {
   // the boolean is not a constant if here
   // We need two labels to generate a branching evaluation
   /* The tree looks like this:
             IFTHEN
            /   |   \
        bool  expr1  expr2
   */

Generate the two labels needed for the code:

   int elseLabel= newLabel();
   int endLabel= newLabel();

branchEval is described below. This is the heart of the optimization:

   one()->branchEval(elseLabel);
   two()->eval();                  // code for THEN part
   cout << " jmp  $LBL_" << endLabel << endl;
   cout << "$LBL_" << elseLabel << ":" << endl;
   three()->eval();                // code for ELSE part
   cout << "$LBL_" << endLabel << ":" << endl;
   }
   break;

The branchEval Function


This function is supposed to generate a comparison from its COMP tree such that a true will fall through and a false will branch to elseLabel. Here's what that function looks like:
void Ceval::branchEval(int elseLabel) const {
   /* 'this' is a Boolean comparison operator tree,
      i.e. LT, LE, etc., like this...
             COMP
            /  |  \
        expr   <   expr
   */
   assert(semt==COMP);   // a precondition
This part is very much like the sub code described earlier. (Note that cmp, unlike sub, leaves its operands unchanged.)

   if (one()->isSimple()) {
      three()->eval();
      cout << " cmp  " << one()->getStringValue() << ",EAX" << endl;
   }
   else if (three()->isSimple()) {
      one()->eval();
      cout << " cmp  EAX," << three()->getStringValue() << endl;
   }
   else {   // have to do this the hard way...
      one()->eval();      // form the left side of the comparison
      cout << " push EAX" << endl;
      three()->eval();    // then the right side
      cout << " pop  EBX" << endl;
      cout << " cmp  EBX,EAX" << endl;
   }

The genCompare function generates an appropriate conditional jump instruction, for example jge label. It's described next.
   genCompare(elseLabel);
}


Here is function genCompare. It, too, makes use of the COMP tree structure carried by this Ceval object.
void Ceval::genCompare(int elseLabel) const {
   // note that the conditional jumps are the inverse of the
   // comparison because we want to branch to the "else" label
   // when the comparison fails
   cout << " ";
   switch (two()->semt) {
   case LE: cout << "jg "; break;
   case LT: cout << "jge"; break;
   case EQ: cout << "jne"; break;
   case NE: cout << "je "; break;
   case GE: cout << "jl "; break;
   case GT: cout << "jle"; break;
   }
   cout << " $LBL_" << elseLabel << endl;
}

What does all this do for us? Here's a sample:


c= a<b ? 3+5 : 4-22;

produces this assembler code:


        .DATA
c       SDWORD 0
        .CODE

Comparing: a<b
        mov EAX,b
        cmp a,EAX
        jge $LBL_0

Here's the true case, 3+5= 8


        mov EAX,8
        jmp $LBL_1
$LBL_0:

Here's the false case, 4-22 = -18


        mov EAX,-18
$LBL_1:

This is where c is set to either 8 or -18.


mov c,EAX


Constant Folding
Since this is a bottom-up parser and tree builder, it makes sense to fold constants as the tree is constructed. We explore how that's done in this section. We will call function foldConst through the grammar semantics just after an expression tree node is constructed. The foldConst operation will do nothing to the tree (returning a pointer to its root) if no folding is possible. If an optimization is possible, it will mutate the tree, and return a pointer to the new tree's root. (This will usually be the same tree node, but sometimes a new one.) We will expect foldConst to look for trivial assignment forms and eliminate them, as well as operations on constants.

Folding Binary Arithmetic


The ADD production rule contains these semantics:
Expr : Expr + Term  #ADD  {
         Expr= new Ceval(ADD, Expr.2, Term, CNULL);
         Expr= Expr->foldConst();
       }

The task for foldConst is to inspect the ADD tree and see if it can be reduced in a machine-independent manner. The possibilities are:
o folding two constant children into a constant result,
o folding expr+0 and 0+expr into expr.
Here's what that compiler code looks like, in function foldConst():
case ADD:
   if (one()->isConst() && two()->isConst()) {
      makeConstantNode(LONGINT,
            one()->getInteger() + two()->getInteger());
   }
   else if (one()->isConst() && one()->getValue() == 0) {
      root= two()->unlink();
      delete this;
   }
   else if (two()->isConst() && two()->getValue() == 0) {
      root= one()->unlink();
      delete this;
   }
   break;

When both children are constant, we want to replace this ADD node with a constant node. That's done in function makeConstantNode, given below:

void Ceval::makeConstantNode(semType st, long value) {
   // make the current tree node into a constant
   // carrying 'value'.  st must be a constant semType
   Csem::semt= st;
   ivalue= value;
   deleteAll();   // delete all children
}

We'll use this function in lots of places, which is why it should be implemented as a function.


Changing a Ceval object from one kind to another is done by changing its semt flag. In this case, the semt was previously ADD. By changing it to LONGINT, we mark this object as a constant. The ivalue field must be set to value. After this node is changed from an ADD node (with two children) to a constant node (with no children), we delete the two child nodes through deleteAll. When one of the children is the constant 0, we can eliminate that constant and the operator by just returning the tree which is the remaining child. In the first case, child one is zero:
} else if (one()->isConst() && one()->getValue() == 0) {
   root= two()->unlink();
   delete this;
}

We unlink child two, returning it from this foldConst call, then delete this. Whenever you delete this, you are not affecting this object's code, but we must not subsequently access this object's data members, since they become invalid. You will notice, by examining the code for foldConst, that the next operation is returning from this function call. The returned pointer is in the local variable root, and is used in the production rule's semantics as the tree root. This is a tricky bit of C++ that is in fact safe, provided that you follow our guidelines--and be bold about it!
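Because this unlink-then-delete-this pattern is easy to get wrong, here is a self-contained miniature of it. The Node type and its names are ours, for illustration only:

   #include <cassert>

   struct Node {
      bool isZero = false;          // stand-in for "is the constant 0"
      Node* left  = nullptr;
      Node* right = nullptr;

      ~Node() { delete left; delete right; }   // deletes remaining children

      Node* unlinkLeft()  { Node* t = left;  left  = nullptr; return t; }
      Node* unlinkRight() { Node* t = right; right = nullptr; return t; }

      // Fold "0 + expr" and "expr + 0"; returns the (possibly new) root.
      Node* foldAdd() {
         Node* root = this;
         if (left && left->isZero) {
            root = unlinkRight();   // keep expr
            delete this;            // frees the ADD node and the 0 child
         } else if (right && right->isZero) {
            root = unlinkLeft();
            delete this;
         }
         return root;               // caller must use this pointer, not 'this'
      }
   };

   int main() {
      Node* zero = new Node;  zero->isZero = true;
      Node* expr = new Node;
      Node* add  = new Node;  add->left = zero;  add->right = expr;
      Node* root = add->foldAdd();   // add and zero are gone now
      assert(root == expr);
      delete root;
   }

The key point is that root is a local variable: once delete this has run, the function touches no member data, only that local, before returning.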

Folding Comparison Operations


We've discussed how the conditional expression
bool ? expr1 : expr2

can be reduced to assembly code in a reasonably optimal way. Before that's done, foldConst will have processed the bool and both expressions. It should now look for one of these cases:
o bool may be constant, in which case either expr1 or expr2 can be selected, eliminating the rest of this tree.
o expr1 may be identical to expr2, in which case either one can be chosen regardless of the bool.
The first case, "bool may be constant", is handled in foldConst like this. The IFTHEN tree has bool as the first child, expr1 as the second child and expr2 as the third child.
case IFTHEN:   // Cond : Boolean ? Cond : Cond
   if (one()->isConst()) {
      // With a constant Boolean, we select one of the
      // two Cond trees as "this" tree
      int v= one()->getInteger();
      if (v == 0) {
         root= (Ceval*) three()->unlink();
      } else {
         root= (Ceval*) two()->unlink();
      }
      delete this;
   }
   else if (two()->isIdentical(three())) {
      root= (Ceval*) two()->unlink();
      delete this;
   }
   break;

The first test is to see if the bool is a constant. If so, we can simply select one of the two expressions.


The corresponding AST operations amount to unlinking the second or third child tree, returning its root node pointer, and deleting this. The second test is whether the two expressions are identical. If they are, there's no point in evaluating the conditional at all. We just return one of the child node pointers, being careful to delete the other one, since it will be disconnected from our main tree.

Folding a Boolean
This simple grammar only supports one kind of Boolean:
x comparison y

where comparison is one of the six Pascal comparison operators <, >, <=, >=, =, <>, and x and y are arbitrary arithmetic expressions that return an integer value. We can reduce this to a constant if both are constants, or if x is identical to y. The corresponding foldConst code found in eval.cpp is this:
case COMP:   // Boolean : Expr Comparison Expr

See if the two expressions are constant. If so, we can replace this tree with a constant 1 or 0, depending on the comparison operator and the two constants.

   if (one()->isConst() && three()->isConst()) {
      long lvalue= one()->getInteger();
      long rvalue= three()->getInteger();
      switch (two()->semt) {
      case LE:

This is the <= case. Function makeConstantNode will replace this tree with a constant whose semt is CHAR, and whose value is 0 or 1.

         makeConstantNode(CHAR, (long)(lvalue <= rvalue));
         break;
      case LT:
         makeConstantNode(CHAR, (long)(lvalue <  rvalue));
         break;
      case EQ:
         makeConstantNode(CHAR, (long)(lvalue == rvalue));
         break;
      case NE:
         makeConstantNode(CHAR, (long)(lvalue != rvalue));
         break;
      case GE:
         makeConstantNode(CHAR, (long)(lvalue >= rvalue));
         break;
      case GT:
         makeConstantNode(CHAR, (long)(lvalue >  rvalue));
         break;
      }
   }
   else if (one()->isIdentical(three())) {

Here, the two expressions are identical. Then the <=, = and >= cases all yield 1, while the other comparisons yield 0. We again replace this node with a constant 0 or 1.

      // comparing N to N yields 0 or 1
      switch (two()->semt) {
      case LE:
      case EQ:
      case GE:
         makeConstantNode(CHAR, 1L);
         break;
      case LT:
      case NE:
      case GT:
         makeConstantNode(CHAR, 0L);
         break;
      }
   }
   break;

Integer Compiler Optimization Results


Our tiny compiler qparser\comp\comp.exe yields the following assembly for the given statements. Some comments are given next to the assignment statement source code.
        include aservice.asm
        .STACK 50000         ; reserve stack space
        .CODE
        PUBLIC _pasMain
_pasMain PROC NEAR32
; 1: // comp2.in: a COMP input program
; 2: // illustrating various constant folding and other optimizations
; 3:
; 4: a=5;
        .DATA
a       SDWORD 0
        .CODE
        mov a,5
; 5: b= 2+4-6+(1+2)*a;     // some optimization here
        .DATA
b       SDWORD 0
        .CODE

line 5 is equivalent to b=3*a, since 2+4-6 = 0.


        mov  EAX,a
        imul EAX,3
        mov  b,EAX
; 6:
; 7: b= 2+a+4+2*a-6;       // no optimization for this

This is the same expression, but the 2+4-6 is not recognized because it's split up by non-constant expressions.
        mov  EAX,a
        add  EAX,2
        add  EAX,4
        push EAX
        mov  EAX,a
        imul EAX,2
        pop  EDX
        add  EAX,EDX
        sub  EAX,6
        mov  b,EAX
; 8:

The following assignment disappears, since the right member is a, identical to the left member
; 9: a= a-(a-a);


Here are some obvious arithmetic reductions


; 10: b= a*0;              // is b=0
        mov b,0
; 11: b= a*1;              // is b=a
        mov EAX,a
        mov b,EAX
; 12: b= a/1;              // is b=a
        mov EAX,a
        mov b,EAX
; 13: b= a/a;              // is b=1
        mov b,1
; 14: b= a+0;              // is b=a
        mov EAX,a
        mov b,EAX
; 15: b= a-0;              // is b=a
        mov EAX,a
        mov b,EAX
; 16: b= (a*a) - (a*a);    // is b=0
        mov b,0
; 17:
; 18:

Some incrementation and decrementation cases


; 19: a= a+1;
        inc a
; 20: a= a-1;
        dec a
; 21: a= 1+a;
        inc a
; 22: a= 1-a;
        neg a
        inc a
; 23: a= a+5;
        add a,5
; 24: a= 5+a;
        add a,5
; 25:
; 26:

Some comparisons, also folded


; 27: b= 1>0;              // is TRUE, or 1
        mov b,1
; 28: b= 0>1;              // is FALSE, or 0
        mov b,0
; 29: b= (a-a) < (a-a);    // becomes b= 0<0;
        mov b,0
; 30: b= (a-b) < (a-b);    // x < x becomes FALSE
        mov b,0
; 31:
; 32: // the conditional construct
; 33: b= a<b ? 3+5 : 4-22;   // is b= a<b ? 8 : -18;
        mov EAX,b
        cmp a,EAX
        jge $LBL_0
        mov EAX,8
        jmp $LBL_1
$LBL_0:
        mov EAX,-18


$LBL_1:
        mov b,EAX
; 34: b= 5>3 ? 1 : 2;           // is b= 1
        mov b,1
; 35: b= 5>3 ? b : 0;           // should disappear
; 36: b= a<b ? 3+5 : 1+1+9-3;   // is b= 8;
        mov b,8
; 37: b= 5<3 ? b : 0;           // is b= 0
        mov b,0
; 38:
        ret                     ; return
_pasMain ENDP
        END

Other Optimizations
The above assembler suggests several more optimizations that are not provided in comp.grm, as follows:
o The statement b= 2+a+4+2*a-6; could have been folded if the sequence of additions were reordered. The general idea is to order a sequence of + or * operators (like this one) so that constants appear first, then variables. By placing the reordered sequence back into tree form, foldConst would then recognize at least the constant portions (2+4-6) as something to fold. (A sketch of this reordering idea follows this list.)
o Recognizing a + 2*a as something that could be rewritten as (1+2)*a, and therefore foldable, requires more algebraic analysis, here to look for the common factor a.
o By collecting a sequence of these assignment statements into an AST, many of the assignment statements could be dropped because the resulting variable change is overridden in a following assignment. This requires an analysis of variable liveness, which we will discuss in chapter 15.
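Here is a sketch of the reordering idea from the first bullet, working on a flattened chain of + terms. The Term type is ours, for illustration only; subtraction would be handled by flattening x-y as x plus a negated term:

   #include <iostream>
   #include <string>
   #include <vector>

   // One term of a flattened chain of '+' operators.
   struct Term {
      bool isConst;
      long value;          // meaningful when isConst
      std::string name;    // meaningful when !isConst
   };

   // Reorder-and-fold for a commutative chain: sum all the constant terms,
   // then emit the folded constant (if any) followed by the variable terms.
   std::vector<Term> foldChain(const std::vector<Term>& terms) {
      long sum = 0;
      std::vector<Term> out;
      for (const Term& t : terms) {
         if (t.isConst) sum += t.value;
         else out.push_back(t);
      }
      if (sum != 0 || out.empty())      // a zero constant simply vanishes
         out.insert(out.begin(), Term{true, sum, ""});
      return out;
   }

   int main() {
      // 2 + a + 4 + b - 6  ==>  a + b   (the constants fold to 0 and vanish)
      std::vector<Term> terms = {
         {true, 2, ""}, {false, 0, "a"}, {true, 4, ""},
         {false, 0, "b"}, {true, -6, ""} };
      for (const Term& t : foldChain(terms))
         std::cout << (t.isConst ? std::to_string(t.value) : t.name) << " ";
      std::cout << "\n";                // prints: a b
   }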

Mixed Mode Arithmetic Optimization and the FPU


The simple compiler found in qparser\comp provides good optimizations of assignment statements, but only for 32-bit integer arithmetic. We now examine some of the consequences of supporting floats as well as integers. Most languages support several different variable types, e.g. floats, integers, strings, sets, etc. Some of these may come in different precisions and lengths. Let's look at how floats and ints can be combined in the same expression in a way that facilitates optimization. The Pascal rules for combining two mixed-mode operands are these:
o for +, - and *, if both operands are float, the operation is done in floating-point mode,
o for +, - and *, if both operands are int, the operation is done in integer mode,
o for +, - and *, if one is float and the other int, the int is converted to float, and the operation is done in float,
o for DIV, the operands must both be int, and the integer quotient is returned (the remainder is discarded),
o for /, the division is done in float, converting operands as required,
o for MOD, the operands must both be int, and the integer remainder of a division is returned.

32-bit integer arithmetic can be done with the CPU instructions as we've seen. It can also be done to full precision in the Intel floating-point unit (FPU). The CPU is generally preferred for performance reasons, since the FPU requires that each operand be loaded from memory, while the CPU can operate from registers as well as memory. The FPU also must convert an integer into its internal 80-bit floating form, while the CPU arithmetic unit operates directly on integers. There is therefore some performance penalty for carrying out integer arithmetic in the FPU, although not as much as one might expect, for these reasons:
o Loading from memory is not necessarily expensive, given that memory is cached in the Pentium processors. This means that a commonly used memory cell may in fact be in a hidden processor cache register.
o The FPU is on the same chip as the CPU in the Pentium. This avoids the extra delays involved in external bus access.
o Advanced versions of the Pentium may in the future provide instructions for loading/storing floats between the FPU and the CPU registers. These do not now exist.
o The FPU operations are driven by low-level microcode, and use many fast shortcuts. Converting a 32-bit integer to floating-point form and carrying out the floating-point arithmetic is in fact very fast.

Loading and Storing Values


The FPU supports a stack of eight floating-point registers, named ST(0), ST(1), ..., ST(7). (See appendix 3 for more details.) A copy of any of these can be pushed onto the stack like this:

   fld st(5)

A memory value can also be pushed into the stack like this:
fld memloc

This causes st(7) to be replaced by st(6), st(6) by st(5), etc., and st(0) to be replaced by memloc.

One drawback to fld and fild is that they cannot accept a constant operand. To load a constant, that constant must be in memory, and have a memory address. (Note that a constant can be in code memory, provided that it does not interfere with an instruction sequence.) So, loading a constant calls for code like this:
        .data
c1      dd 1456
        .code
        fild c1        ; an integer constant

        .data
c2      dd 3.14159     ; a floating-point constant
        .code
        fld  c2

Integer and float constants must be either 32-bit or 64-bit precision. Masm determines the precision by noticing the dd for 32-bit precision and dq for 64-bit precision, and generates the appropriate binary form for fild or fld, since these also must know which precision to load.

A floating-point value is stored in memory with fstp (store a float) or fistp (store an integer). The p here refers to the stack pop that occurs after the store: fst and fist store the number without popping the FPU stack. As with fld and fild, the value is stored in 32-bit or 64-bit precision, depending on how the operand was declared, as follows:
        .data
d1      dd ?           ; a 32-bit element
q1      dq ?           ; a 64-bit element
        .code
        fstp  d1       ; stored as a 32-bit float
        fistp d1       ; stored as a 32-bit integer
        fstp  q1       ; stored as a 64-bit float
        fistp q1       ; stored as a 64-bit integer

Storing an integer from an internal floating-point form implies that any fractional part will be lost. The FPU can be configured to either truncate the fraction from above (ceiling) or below (floor), or to round the result, by setting the RC field of the FPU control word flags.

The four binary arithmetic operations (fadd, fsub, fmul, fdiv) can take 0, 1 or 2 arguments. With no argument, the top two stack elements are operated upon, and are replaced (in stack fashion) by the result, like this:

   fsub               ; st(1)= st(1) - st(0), then POP the stack

Note that if a value x is pushed in the stack, then y, fsub will correctly compute x-y. With one argument m, the operation is done as st(0) op m, and the result replaces st(0), like this:

   fsub m             ; st(0) = st(0) - m

With two arguments, the arguments must be in the stack, and one must be st(0), like one of these:

   fsub st(0),st(i)   ; st(0) = st(0) - st(i)
   fsub st(i),st(0)   ; st(i) = st(i) - st(0)

These different forms offer some opportunities for optimization, which we will not explore. Since the no-operand form behaves exactly like a postfix evaluator, it's particularly easy to use in a compiler framework with expressions. Little performance improvement results from using the extended forms, so we won't.
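Since the no-operand forms behave like a postfix machine, emitting FPU code from an expression tree is just a postorder walk. A minimal illustrative sketch (the FNode type and its layout are ours, not Qparser's):

   #include <iostream>
   #include <string>

   struct FNode {
      std::string op;       // "fadd", "fsub", "fmul", "fdiv", or "" for a leaf
      std::string memloc;   // memory operand name, for leaves
      FNode* left  = nullptr;
      FNode* right = nullptr;
   };

   // Postorder walk: push both operands, then apply the no-operand form,
   // which pops twice and pushes the result -- exactly postfix evaluation.
   void emitFPU(const FNode* n) {
      if (n->op.empty()) {
         std::cout << "        fld   " << n->memloc << "\n";
         return;
      }
      emitFPU(n->left);
      emitFPU(n->right);
      std::cout << "        " << n->op << "\n";
   }

   int main() {
      // (r1 + r3) emitted as: fld R1_010 / fld R3_012 / fadd
      FNode r1{"", "R1_010"}, r3{"", "R3_012"};
      FNode add{"fadd", "", &r1, &r3};
      emitFPU(&add);
      std::cout << "        fstp  R2_022\n";   // store and pop the result
   }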

Floating Add of two Floating-point Variables


The most common operation is adding, subtracting, multiplying or dividing two operands. All of these follow the same pattern in the FPU: push the two operands (if they are not already in the stack), then execute the appropriate operation, which pops the two operands and pushes the result. Here's an example:

; 209: r2 := r1+r3;
        fld  R1_010    ; read and push r1 into the FPU stack
        fld  R3_012    ; read and push r3 into the FPU stack
        fadd           ; add the two with two pops and a push
        fstp R2_022    ; store result in r2, and pop the FPU stack

Floating Add Instruction Sequence with Integer Conversion


A real variable is to be added to an integer variable, and the result copied to a memory location carrying a real number:
; 210: r2 := r1+i1;
        fld  R1_010    ; read and push r1 into the FPU stack
        fild I1_008    ; fild can load a 32-bit number from memory
        fadd           ; perform the floating addition on FPU stack
        fstp R2_011    ; store the result in r2, and pop the FPU stack

From a compiler's perspective, the expression r1+i1 needs to be recognized for what it is. Using an AST in which all nodes are tagged with a type {float, int} aids compilation. The reverse code looks like this:
; 211: r2 := i1+r1;

        fild I1_008
        fld  R1_010
        fadd
        fstp R2_011

Mixed-mode Arithmetic
Since Pascal requires operations on two integer operands to be done in integer mode, we need to decide whether that part should be done with CPU or FPU instructions. Either way achieves the same result, except for division. Integer division in the FPU will result in a fractional part, which should be truncated. We can truncate st(0) with the instruction
frndint

This will round or truncate st(0), according to the setting of the RC field of the FPU control word. Let's assume that this field is set to correctly truncate a fraction by Pascal rules. Here's an example of working with mixed-mode arithmetic following these guidelines:
; 212: r2= i1 div i2 - r3;
        fild I1_008
        fild I2_009
        fdiv
        frndint        ; truncate to an integer
        fld  R3_012
        fsub
        fstp R2_011

An alternative approach is to carry out all integer operations in the CPU. This requires transferring an integer into the FPU at some point, which requires a temporary memory location. Here's how the above operation would look using that strategy:

        .data
RTMP    dd ?
        .code
; 212: r2= i1 div i2 - r3;
        mov  eax,I1_008
        cdq
        idiv I2_009
        mov  RTMP,eax
        fild RTMP
        fld  R3_012
        fsub
        fstp R2_011
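Incidentally, the effect of the RC rounding-control field mentioned above can be imitated on a modern host with the standard C++ <cfenv> rounding controls. This is only an analogy -- in the compiler itself the control word is managed by the generated runtime code:

   #include <cfenv>
   #include <cmath>
   #include <iostream>

   int main() {
      double x = -2.5;
      std::fesetround(FE_TONEAREST);   // like RC = round-to-nearest
      std::cout << std::nearbyint(x) << "\n";   // -2 (ties go to even)
      std::fesetround(FE_TOWARDZERO);  // like RC = truncate (chop)
      std::cout << std::nearbyint(x) << "\n";   // -2
      std::fesetround(FE_DOWNWARD);    // like RC = floor
      std::cout << std::nearbyint(x) << "\n";   // -3
   }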

Transferring a Floating Constant to a Memory Location


For example:
r2 := 5;

This operation does not require any FPU services. The compiler can convert the literal value 5 into its floating-point equivalent
040a00000h

This is just a double word and can be transferred to variable r2 with a 32-bit mov instruction, like this:
; 135: r2 := 5;
        mov R2_011,040a00000h

Note that the compiler must know how to convert numbers into the supported floating-point format for its assembler. (Something's wrong if it doesn't!)
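One way a compiler written in C++ can produce such hex constants is to copy the float's bit pattern into an integer. A sketch (floatBits is our illustrative helper; memcpy avoids strict-aliasing trouble):

   #include <cstdint>
   #include <cstdio>
   #include <cstring>

   // Return the IEEE-754 single-precision bit pattern of f,
   // suitable for emitting as a Masm hex literal.
   std::uint32_t floatBits(float f) {
      std::uint32_t bits;
      std::memcpy(&bits, &f, sizeof bits);
      return bits;
   }

   int main() {
      std::printf("0%08xh\n", (unsigned) floatBits(5.0f));    // 040a00000h
      std::printf("0%08xh\n", (unsigned) floatBits(0.333f));  // 03eaa7efah
   }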

Example of Floating-point Code


Here's an assembler output listing of code using a variety of optimizations, including constant folding, some assignment statement optimization, elimination of idempotent operations (0*m, 1*m, 0+m, etc.), mixed type conversions and an intelligent stacking policy. These are for integers, reals and booleans, and use the tree-walking model described above. This example was generated by a more sophisticated expression evaluator that handles integer, floating-point and Boolean types: the compiler found in qparser\pascal5. It contains excerpts from the assembler generated from pasprogs/t1.pas. Program t1.pas was written to test the arithmetic operations for the compiler, as optimized, and contains a large number of test function calls. We've excised most of the t1.pas operations to show how it deals with float vs. integer arithmetic.
; 1: program t1;
; Pascal program T1
        include aservice.asm
; 2: var c1, c2, c3: char;
; 3:     i0 : integer;
; 4:     i1, i2, i3: integer;
; 5:     r1, r2, r3: real;
; 6:     b1, b2, b3: boolean;
; 24: begin
        public _pasMain
_pasMain proc near
        push stlink+4
        push ebp
        mov  ebp,esp
        push ebx

Constant folding with real arithmetic: 5/3 is 1.666667


; 25: r3 := 5/3;
        mov R3_047,03fd55555h

Here we have a float variable divided by what appears to be an integer constant. The operation must be done in the FPU in any case
; 26: r3 := r3/5;
        fld R3_047

A constant (5) is introduced as a floating-point datum just once. Here it's given the label L5. Any subsequent appearance of constant 5 becomes a reference to L5. An alternative strategy would be to declare the constant as an integer, then use fild to load it. One way is probably as fast as the other.
        .DATA
L5      dd 040a00000h
        .CODE
        fld  L5
        fdiv
        fstp R3_047

Simple copies of reals needn't involve the FPU at all


; 293: r1 := 0.333;

The compiler worked out (hex) 3eaa7efa as the float equivalent of 0.333:

        mov R1_045,03eaa7efah


; 294: r2 := 5;

This is what 5 looks like as a float:

        mov R2_046,040a00000h

A simple copy of a real can be done with mov or with the FPU, about equally fast:

; 295: r3 := r1;
        fld  R1_045
        fstp R3_047

Here's some arithmetic. Notice that 5 is recognized as already in memory at location L5.

; 298: r3 := 5+r1;
        fld  L5
        fld  R1_045
        fadd
        fstp R3_047

Idempotents work for real numbers, too: the zero disappears. We could also have done this with the CPU, which might save several machine cycles.
; 368: r2 := r1 - 0;
        fld  R1_045
        fstp R2_046

Here's what a sign change looks like


; 370: r2 := 0 - r1;
        fld  R1_045
        fchs
        fstp R2_046

An increment isn't so easy in floating point. We have to generate an fadd instruction with a constant 1.0
; 416: r2 := r2 + 1;
        fld  R2_046
        .DATA
L112    dd 03f800000h
        .CODE
        fld  L112
        fadd
        fstp R2_046

Loading and converting a char to a float


; 428: r2 := c1;

Since C1 uses only one byte in memory, we use a mov al to access it.

        mov  al,C1_038
        and  eax,255    ; this and clears the high-order 24 bits of EAX
        mov  rtmp,eax   ; EAX is copied to a temporary memory location
        fild rtmp       ; ...then fild is used to get the character into the FPU
        fstp R2_046

Loading and converting an int to a float. This isn't completely optimized: fild could be used to load i1 directly.
; 455: r2 := i1-r1;
        mov  eax,I1_042
        mov  rtmp,eax
        fild rtmp
        fld  R1_045
        fsub
        fstp R2_046


; 519: end.

that's all, folks!


        mov esp,ebp
        pop ebp
        pop stlink+4
        ret
_pasMain endp

Register Allocation within an AST


Somewhat better code could be generated within an AST by taking advantage of the complete set of general-purpose registers available in a typical processor. For example, consider this assignment statement:
x := a*b + b*(c+d);

The pascal5 compiler generates this code:

   mov  eax,D
   add  eax,C
   imul B
   push eax
   mov  eax,B
   imul A
   pop  edx
   add  eax,edx
   mov  X,eax

The push and pop would clearly be unnecessary if we used another register to hold a*b while we worked on the other component of this expression, viz:

   mov  eax,D
   add  eax,C
   imul B
   mov  ebx,eax   ; use another register instead of a push
   mov  eax,B
   imul A
   add  eax,ebx
   mov  X,eax

Optimal AST Evaluation for a Multiregister Machine


An algorithm due to Meyers [1], Nakata [2] and analyzed by Sethi and Ullman [3] generates optimal code from a binary expression tree for a multiregister machine. By a multiregister machine, we mean one in which the operations are of the form
   op   R1,M,R2    ; R2 = R1 op M
   op   R1,R2,R3   ; R3 = R1 op R2
   load M,R        ; R = M
   stor M,R        ; M = R

where M is some memory location and R1, R2, etc. are machine registers. Notice that the operator op R1,M,R2 does not necessarily have a counterpart op M,R1,R2. Thus the machine may have a subtraction operator R-M, but no reverse subtraction M-R. This model clearly does not fit the Intel binary CPU or FPU architecture. However, it works well for most RISC platforms, the IBM 360 family and the Motorola 68000 family.

We start by assuming that the tree's value will be left in a register. It's clear that some trees will require no stor operations -- most will not, in fact, if the number of registers is fairly large. The problem essentially is to identify those leaf nodes that require a load, those that must be stored, and a suitable order of performing those operations. Every node in the tree will require one op regardless of the load/stor operations, so a minimal solution is one with the fewest load and stor operations.

The first operation is tree labeling -- an algorithm that decorates each tree node N with the number of registers needed to evaluate the subtree rooted in N without any stor instructions. We define a function that works out the register count for any given node, treecount. Here's what that looks like, in C-style pseudo-code:
int treecount(node* n)
// n is some tree node; this returns a register count for n
{
   if (leaf(n)) {          // n has no children
      if (n != root)       // root is the AST tree root node
         return (n == leftchild(parent(n))) ? 1 : 0;
      else
         return 1;
   }
   else {                  // n has children here
      int t1, t2;          // temporary counts
      t1= treecount(leftchild(n));
      t2= treecount(rightchild(n));
      return (t1 == t2) ? t1+1 : max(t1, t2);
   }
}

Function treecount is a bottom-up algorithm. The count for some node can only be determined if it is a leaf node or if the labels for its children are worked out. Thus in the non-leaf case, the function is called on the two children. We could clearly develop this count during a bottom-up parse, with some slight modifications.

Suppose that a node n is a leaf node. If it's a left leaf, we need one register to carry its value in order to perform the parent operation. If it's a right leaf, its parent operation can be performed from memory, requiring no register.

Next, suppose that node n is an internal node. Let t1 and t2 be the counts for the left and right children, respectively. If these are the same, then that many registers are needed for each of the subtrees. Note that the count for each subtree is at least 1. We need one more register to carry out the operation in this case, to hold the value of the left subtree while the right subtree is being evaluated, or vice versa; hence the count for node n should be t1+1. If these are not the same, at least one register is available for one of the subtrees (the one with the smaller count), but no additional registers are required; hence the count for n is max(t1, t2).

Now that the AST nodes are decorated with node counts, we can write an optimal code generation algorithm, function evaluate. This is a top-down algorithm, similar to those examined above, except that it generates multi-register instructions of the form given above.


It requires a few support functions, as follows:
o emitload(m, r): emits a load m,r instruction
o emitstor(m, r): emits a stor m,r instruction
o emitop(op, r1, s, r2): emits op r1,s,r2, as defined above
o exchange(n1, n2): interchanges the nodes n1, n2
o op(n): the operator associated with node n; n must be an internal node
o count(n): the register count associated with node n
o memloc(n): the memory location associated with node n; n must be a leaf node
o allocate(): returns a temporary memory location. Temporaries are required when we run out of registers
o release(t): returns a temporary location t to a pool. Those in the pool may be subsequently used for other intermediates
o NR: the number of available registers, a constant

Here is the code generation algorithm, again in pseudo-C style. We follow this with a brief discussion of the interesting sections, marked (A), (B), etc.
void evaluate(node* n, int m) {
   // n is a subtree node
   // m is the first of a set of NR available registers
   if (count(n) == 1) {
      if (leaf(n))
         emitload(memloc(n), m);                        // (A)
      else {
         evaluate(leftchild(n), m);                     // (B)
         emitop(op(n), m, memloc(rightchild(n)), m);
      }
   }
   else {   // count(n) > 1, therefore n has children
      node *n1, *n2;
      int t1, t2;
      n1= leftchild(n);
      n2= rightchild(n);
      t1= count(n1);
      t2= count(n2);
      if (min(t1, t2) >= NR) {
         int t;
         evaluate(n2, m);
         t= allocate();                                 // (C)
         emitstor(t, m);
         evaluate(n1, m);
         emitop(op(n), m, t, m);
         release(t);
      }
      else if (t1 != t2) {
         if (t1 > t2)                                   // (D)
            exchange(n1, n2);
         if (min(t1, t2) > 0) {
            evaluate(n2, m);
            evaluate(n1, m+1);
            if (t1 < t2)
               emitop(op(n), m+1, m, m);
            else
               emitop(op(n), m, m+1, m);
         }
         else {
            evaluate(leftchild(n), m);
            emitop(op(n), m, memloc(rightchild(n)), m);
         }
      }
      else {
         evaluate(n1, m);                               // (E)
         evaluate(n2, m+1);
         emitop(op(n), m, m+1, m);
      }
   }
}

Section (A) deals with the case of a leaf node n with label 1. n must be a left child of its parent. No operation has its left operand in memory, so we must load the value of n. We use the next available register m for this purpose.

In section (B), the right child of n must be a leaf -- the count of n cannot be 1 otherwise. We can then evaluate the left child of n with registers m, m+1, ..., NR, which are available, and the operation of n can then be done with no additional registers.

In section (C), both children of n require more registers than are available. This calls for allocating a temporary memory cell, which turns out to follow a stack discipline. So this could just turn into the usual push/pop.

In section (D), one of the labels (t1, t2) is less than NR. This means that at least one register is available for the operation op(n). A special case arises if the lesser label is zero; it must be the label of the right child, a leaf node. We can therefore evaluate the left subtree with registers m, m+1, ..., NR available, then operate directly on register m and the right child leaf. If the lesser label is not 0 in section (D), we evaluate the subtree with the greater label first, then the subtree with the lesser label. The former result is left in m and the latter in m+1. Finally, the operation op(n) will be a register-register operation, with the result in register m.

In section (E), the subtree labels are equal and less than NR. We therefore have an available register for the operation, and it will be register-register.


An example is in order. Consider the following expression tree, which has been labeled with the register counts per treecount. Assume that NR =2, which will exercise the portion of the evaluate algorithm that spills registers into temporaries.

                       +  n1:3
                     /        \
                -  n2:2        *  n3:2
               /      \       /      \
           a n4:1  * n5:1  d n6:1  / n7:2
                  /     \         /     \
              b n8:1  c n9:0  e n10:1  + n11:1
                                      /      \
                                  f n12:1  g n13:0

            a - b*c + d*e/(f+g)

When all the evaluate calls are worked through, we obtain the following code:
   load e,r1       // r1= e
   load f,r2
   add  r2,g,r2    // r2= r2+g
   div  r1,r2,r1
   load d,r2
   mpy  r2,r1,r1
   stor t1,r1      // t1 is a temporary, holds the right tree (n3)
   load a,r1
   load b,r2
   mpy  r2,c,r2
   sub  r1,r2,r1
   add  r1,t1,r1   // t1 is recalled here

The most remarkable thing about this algorithm is that it is provably optimal. It will always generate a code sequence with the fewest number of spills of registers to temporary memory locations. Applying this to the Intel CPU architecture cannot be done, for the reasons given above. Even the FPU architecture, with its 8 floating-point registers, does not fit this model.

AST vs. Block Optimization


Optimization of register assignment really calls for working on code improvement over a sequence of statements rather than bothering with single statements. As we'll see in chapter 15, block optimization starts by breaking down all operations into simple single-operator quads of the form (+, A, B, T), which adds A to B, with the result going to T. The computation involved in an assignment statement AST can clearly be expressed as a sequence of quads like this one. We might as well combine a sequence of assignments to form a list of such quads, then consider how variables can optimally be assigned to registers, as well as looking for redundant code, common subexpressions, etc.

Knuth [4] studied a large sample of FORTRAN programs, and found that of all the assignment statements in his sample, 68% were a simple replacement of the form A=B, with no arithmetic operators; 13% were of the form A=A op B, with the first operand on the right the same as the replacement variable. The remaining 19% had a more complex structure, most of which involved very few operators. Given such a low percentage of complex expression forms, developing a special optimization algorithm for single expressions is hardly worthwhile.
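For concreteness, here is a minimal sketch of what a quad might look like as a data structure. The field names are ours, for illustration; chapter 15 gives the real treatment:

   #include <iostream>
   #include <string>
   #include <vector>

   // One quad: t = a op b.
   struct Quad {
      char op;           // '+', '-', '*', '/', or '=' for a plain copy
      std::string a, b;  // operands ("" when unused, as in a copy)
      std::string t;     // result: a variable or a compiler temporary
   };

   int main() {
      // x := a*b + b*(c+d) broken into single-operator quads:
      std::vector<Quad> quads = {
         {'*', "a",  "b",  "T1"},
         {'+', "c",  "d",  "T2"},
         {'*', "b",  "T2", "T3"},
         {'+', "T1", "T3", "T4"},
         {'=', "T4", "",   "x" },
      };
      for (const Quad& q : quads) {
         std::cout << q.t << " := " << q.a;
         if (q.op != '=') std::cout << " " << q.op << " " << q.b;
         std::cout << "\n";
      }
   }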

Testing Optimizations
An optimizing compiler can be difficult to test unless you understand just what optimizations it's performing and how it proposes to do them. If you weren't aware that a compiler might optimize x := x*2 (say) by shifting x one position to the left, rather than carrying out the multiply, then your test cases might overlook that case completely. That implies that testing an optimizing compiler should be done through white box testing, not black box testing. At the very least, the compiler writer should detail the optimization strategies, if not the compiler source code.

An important safeguard against overlooking optimizations is to look for complete instruction coverage in the compiler in a test suite. In our case, the compiler code (especially the eval functions) should be monitored through a profile utility during a test run to see which sections of code are not exercised at all, or very lightly. When such a section is discovered, test cases need to be devised to exercise it more fully. The consequence of leaving some section untested is that, sooner or later, some customer will test it, and will rightly be peeved about the bug left in your compiler.

The best tests are also full of "assert" calls. If you examine t1.pas, you'll find an assertion following almost every assignment. Its purpose is to see if the result agrees with what's expected. It's silent if in agreement, but will complain with a line number otherwise. Here are some examples:
c1 := 15;
c2 := 17-c1;
assert(c2= 2, _LINE_);
c3 := c1-5;
assert(c3= 10, _LINE_);
c3 := c1*c2;
assert(c3= 30, _LINE_);

The special variable _LINE_ evaluates to the current Pascal line number, making it easy to locate the source of an assert complaint. The first argument is a Boolean that should return true if the evaluation is correct. Thus in the above, 17-c1 should be 2, c1-5 should be 10 and c1*c2 should be 30. Note that these assert functions are in Pascal, not C.

Directory pasprogs is full of Pascal example programs, many of which are designed to test features of our compiler and report any problems found. There's also a make file which can not only compile, assemble and link these programs, but also execute a test. Of course, this must be executed on an Intel platform which supports an appropriate assembler and C/C++ compiler. The Qparser suite has been tested on Linux vs. 8 (Intel) and Windows 2000, with the Microsoft ML assembler and Visual Studio vs. 6.


Summary
A simple bottom-up or top-down parser can generate correct assembly code for an expression or assignment statement as it parses, but the result is not necessarily well-optimized code. One symptom of coding during parsing is an excessive number of pushes and pops of registers. Another is failure to identify constant operations that can be folded at compile time.

A powerful and general approach to optimization is to first construct an abstract syntax tree. An AST contains operators as internal nodes, with identifiers or constants as leaf nodes. An AST is in effect a reduced derivation tree, in which single production rules are collapsed, along with parenthesized and other useless operators. No code is generated during construction of the AST; however, several reductions can be performed during its construction:
o constant folding
o elimination of useless operations
o elimination of useless assignments
o folding by algebraic simplification

An AST may represent a single expression, an assignment statement, a sequence of statements, a whole function, or the whole program. In our implementation, the tree nodes are identified by the tags used for certain of the production rules -- those that call out operations. In this way, the structure of the tree can be inferred from the production rules. The tree mechanism is supported through the Ctree class (described in appendix 1), which carries node data through an inheritance mechanism, facilitates tree manipulation, and can efficiently support any number of children for any node.

Advanced optimizations may convert an AST into a directed graph that represents the operations of a block of code. It's possible to identify common subexpressions, eliminate useless assignments, etc. Once an algorithmic (code-independent) AST or directed graph is constructed, the AST is also useful in finding optimal code sequences that carry out the operations. For the Intel architecture in particular, these are easy to handle:
o reduction of stack pushes and pops, when registers can be used instead
o choice of optimizing operations, for example, use of INC or DEC instead of ADD/SUB operations
o choice of using the FPU or the CPU arithmetic operations in mixed-mode arithmetic expressions, in the search for the most efficient code

Most of these are built into the student Pascal compiler found in directory pascal5. A provably optimal register allocation strategy for an AST exists, for a certain class of multiple-register architectures. Unfortunately, the Intel Pentium architecture is not among them.

References
[1] Meyers, W. J., Optimization of Computer Code, unpublished memo, G.E. Research Center, Schenectady, NY, 12 pages, 1964.
[2] Nakata, Ikuo, On Compiling Algorithms for Arithmetic Expressions, CACM 10(8), 492-494, 1967.
[3] Sethi, R. J., and J. D. Ullman, The Generation of Optimal Code for Arithmetic Expressions, JACM 17(4), 715-728, 1970.
[4] Knuth, D. E., An Empirical Study of FORTRAN Programs, Software Practice and Experience, 1, 105-134, 1971.


Chapter 12: Type Declarations and Type Checking


W. A. Barrett, San Jose State University nch12.doc, vs 3.1

Types of Literals and Identifiers


A type is some attribute of a literal value or a user identifier. By attaching suitable attributes to identifiers and literals, the use of one of these within the context of various operators can be given a precise meaning. For example, the literal value
0.56E-3

has the type floating-point in Pascal and C. Most C compilers distinguish double precision and single precision floating-point numbers. This one would be considered a double precision number. The type of a literal number, string or character can usually be inferred from its form or value. In this case, the decimal point "." stamps this as a floating-point number. The E further distinguishes it as a double-precision number (in C). The type of a user identifier can't be inferred from the identifier string by itself. Its type is fixed by a declaration. For example, in Pascal, the declaration
VAR k: integer;

stamps the user identifier as having type integer as well as being a variable. Pascal and C require that every user identifier be declared before any reference appears in the source program. We can therefore say that a type is associated with each user identifier. The type information will become part of the identifier's attributes when the identifier is entered in the compiler's symbol table. A type can take many different forms, including simple types (integer, real, double, etc.), arrays, records, pointers, functions, and classes. Pascal also supports a subrange type, and a powerset type. A file may be considered a type. We will implement a string type, an extension to Pascal.
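A sketch of how such type attributes might ride along in a symbol table. This is a deliberate simplification (the Symbol struct and names are ours); our actual symbol table machinery is the subject of chapter 6:

   #include <map>
   #include <string>

   // The attributes bound to a declared identifier.
   enum class TypeKind { Integer, Real, Boolean, Char };

   struct Symbol {
      TypeKind type;
      bool isVariable;   // as opposed to a named constant, function, ...
   };

   int main() {
      std::map<std::string, Symbol> symtab;
      // VAR k: integer;  records both the type and the variable-ness
      symtab["k"] = Symbol{TypeKind::Integer, true};
      // a later reference to k consults the table
      const Symbol& s = symtab.at("k");
      return s.type == TypeKind::Integer ? 0 : 1;
   }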

Expression Types
A type is also associated with expressions, since these can be considered to return some value at runtime, and that value will be associated with a type at compile time. We can therefore reasonably talk about the type of a variable or the type of an expression from within the environment of the supporting language. Types are very important in understanding just how to interpret the operators of a language. Consider the following expression:
a + b

This is meaningless in most modern languages without knowing the types of the identifiers a and b. The operator + is in fact overloaded in most languages. It could call for an integer addition in any of several different precisions (8 bit, 16 bit, 32 bit, decimal arithmetic, etc.). It could call for a floating-point addition, again in any of several different precisions. It might also call for a string concatenation, a usage supported by Pascal, Ada, Modula and Java. Also, since C++ supports operator overloading, the programmer can assign a new usage to any of the C++ operators, including +, so that the + operator might be interpreted as a complex number addition, a vector addition, a matrix addition,


etc. If the types of a and b are different, a compiler clearly has the additional burden of deciding on a common result type in which the arithmetic can be performed. The reason why arithmetic must be performed in a common type is that this restriction is built into the design of most microprocessors. There are processor instructions for adding two integers, or for adding two floats, but no single instruction for adding an integer to a float. There's always an instruction or function to convert a single value from one type to another, so a combination of this with a binary instruction is required at the processor level. It's up to the compiler to first work out a suitable conversion, then set up the operation in a common type. The type of the expression a+b and the exact form of the operation is determined by the operator and the types of its operands. If both a and b are type integer, then the type of the expression is considered to be integer. If either a or b is type real, then the expression type is considered to be real. The operand types of a binary operator need not be the same. For example, in Pascal, the expression
n in pset

requires that pset be a powerset type, and that n be a set member compatible with the types carried by pset. The expression as a whole returns a boolean (true or false). The idea is that the in operator tests whether n is a member of this particular set or not, so three types are involved: the member type, the powerset type and the result type.

Type Casting
Let's look at a+b in more depth. Assume that the compiler knows that variable a has the type single-precision float, and that variable b has the type 32-bit integer. The compiler is then expected to first convert b to floating-point, in order to satisfy the microprocessor's constraints, which demand that both operands be of the same type. The conversion of a variable from one type to another is called a type conversion or type cast. A conversion is expected to preserve the mathematical value of the variable to the extent possible. It happens that the domain of floating-point numbers contains the domain of integers (but not vice versa), so it's reasonable for the compiler to assume that the programmer expects b to be converted to a float, through an implicit type cast, even though an explicit conversion isn't given. That is, this type cast is implied by the types of the operands of this expression and the operation. In the Pentium environment, this addition is optimally done like this:
fild    b
fld     a
fadd

In order to generate these instructions, the compiler needs to recognize that one of the operands is a float, and therefore that both operands need to be loaded in the FPU as floats. fild expects its memory address to hold an integer, while fld expects its memory address to hold a float. Another example of a type cast requiring some runtime conversion is from char to integer. Although in Pascal, a char is normally associated with an ASCII character, it has the internal form of an 8-bit unsigned number. In the Pentium, if the character is loaded into register al, it can be expanded into a 32-bit unsigned integer in EAX like this:
movzx EAX,AL

A type cast that requires no runtime conversion at all can be found in the C language. Changing a signed 32-bit integer to unsigned requires no runtime operation at all, since the two internal forms are equivalent. Unfortunately, this particular cast causes a negative integer to appear to be positive. For


example, the signed integer -1 is represented as 0xFFFFFFFF. As an unsigned integer, it has a large decimal value: 4294967295. The C programmer must be wary of such casts unless he/she is certain that the integer being cast falls within the range 0..0x7FFFFFFF; these values are safely cast from signed to unsigned. A similar problem arises with signed vs. unsigned chars, and signed vs. unsigned long ints: negative signed numbers do not cast safely to their unsigned equivalents, nor the converse.
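The compiler's side of this decision can be reduced to a small routine. The following is a hedged sketch, not the student compiler's actual code: given the basic types of the two operands of an arithmetic operator, it chooses the common type in which the operation will be performed, and thereby which operand needs an implicit cast:

enum BasicType { T_CHAR, T_INTEGER, T_REAL };

// Pick the common arithmetic type for a binary operation.
// Any real operand forces a real operation; a char operand is
// widened to integer (as with movzx above).
BasicType commonType(BasicType a, BasicType b) {
    if (a == T_REAL || b == T_REAL)
        return T_REAL;      // the other operand receives an implicit cast
    return T_INTEGER;       // chars are promoted to integer width
}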

Literal Types
A literal is a source program form that represents a particular numeric, string or set value. Each of the following is a literal:
15           ; an integer literal
22.7         ; a floating-point literal
'a string'   ; a string literal
[3, 5, 7]    ; a powerset literal
[]           ; a powerset with no elements
NIL          ; a universal "null" pointer value

The type of a literal value is not always clear from its appearance in the source program. For example, the literal 1 in ANSI C can be interpreted as any one of several different types, including char, unsigned char, short, unsigned short, int, unsigned int, and more. (In simple Pascal, it can only be type integer). The string '3' in Pascal can be interpreted as a string that happens to contain one character, or a character type. Such ambiguities are difficult to handle in a compiler without some additional rules that govern combinations of operands with operators. To see how the operator can influence the choice of type of a literal, consider the expression
'3'+'4'

This does not evaluate to '7' in Pascal, as one might suppose. Instead, the result is the string '34'; the + operator between two string-like objects means concatenation, not addition. The rules for the + operator in Pascal require that it either operate on a pair of numbers or on a pair of strings. A character is not considered to be a number, hence in this context, both '3' and '4' must be considered strings. For this reason, the expression
'3' + 4

is considered illegal as an attempt to add two dissimilar types. If it's necessary to treat the ASCII character '3' in its numeric form, Pascal provides a built-in cast function ord; thus ord('3') has the type integer, with the value 51, the ASCII code for the character '3'. We can then legally write
ord('3') + 4

which yields the value 55. If you need the decimal equivalent of an ASCII digit, you need to subtract ord('0') from it. Thus
ord('3') - ord('0') + 4

evaluates to 7. Operator chr is provided in Pascal to convert an ordinal value (an integer) into a character type. For example,
chr(51) + '4'

yields the string '34'.

Type-tagged Architectures
Some programming systems, such as Smalltalk and Lisp, carry every number, string and variable as a type-tagged object at runtime. The tag is a few bits carried with each value at runtime. For example, a few types could be supported with 2 extra bits, using the following plan:

tag   type of value
00    32-bit integer
01    64-bit floating-point
10    32-bit pointer
11    extended type (more bits needed)

In such an architecture, a 32-bit integer requires 34 bits to represent it. The tag is understood to be not part of the number, but rather a way of marking the object as type 32-bit integer. By tagging every simple type, a Lisp interpreter can work out all necessary type conversions and arithmetic at runtime. For example, it can be asked to add a 32-bit integer to a 64-bit float. By checking the tags first, the processor would decide that the arithmetic must be carried out in 64-bit floating form. The integer would be converted to a float, the operation carried out in floating-point, and the result would be a tagged 64-bit float. The runtime environment is therefore expected to work out all necessary type conversions.

Such tagged runtime environments have not found their way into general industrial and commercial practice, despite certain advantages to this approach. It's clear that the tag requires additional memory space, and resolving the tags at runtime tends to reduce performance. Instead of a 32-bit integer requiring exactly 4 bytes, it would require 34 bits. The only reasonable way of supporting such an integer on an Intel or Motorola platform would be as a 5-byte number, using the extra byte as the tag. Thus the memory required for integers is 25% larger with tags than without. Alternatively, the largest 4-byte unsigned number would have to be 2^30-1, not 2^32-1, and that would cause some programs to break. Also, the Intel/Motorola micros provide no runtime tag checking as part of the instruction operations. So a simple addition would first require several instructions to test the tags, then sort out which of several possible operation sequences to perform. The resulting value also needs to have its tag set as well as its value. Obviously, all of this would seriously damage performance.

Nevertheless, Daniel Hillis, in his MIT PhD thesis, has shown that a tag-checking architecture can be designed with no inherent performance penalty. The general idea of such a machine is that several operations are launched in parallel while the tags are being tested. The outcome of the tag checking can be known before the operations are complete, permitting a selection of the result. His ideas culminated in the development of the so-called Lisp Machine workstation, which was manufactured and sold by two different companies for several years. Eventually, both companies failed, probably through price competition with conventional workstations, and the development of Lisp software systems that ran reasonably well on conventional hardware. Neither company was successful in developing low-price microprocessor platforms for their machines, in competition with conventional micro development by Intel, Motorola and others.

Given this situation, the burden of resolving types has become a compiler function, not something that can be left to the runtime environment. A great blessing of a strongly typed language is its ability to detect and warn about illegal or suspicious operations during the development of the program, rather than leave such issues to runtime testing. The use of types strongly enforced through a sophisticated compiler framework has made programming large projects far more reliable than in the past.
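To make the runtime cost of tag resolution concrete, here is a hypothetical C++ sketch of what a tagged addition must do in software; a tagged hardware architecture performs the equivalent tests in its instruction logic:

enum Tag { TAG_INT = 0, TAG_FLOAT = 1, TAG_PTR = 2, TAG_EXT = 3 };

struct Tagged {            // a tagged runtime value: tag plus payload
    Tag tag;
    union { long i; double f; };
};

// Tagged addition: test both tags, convert as needed, and tag the result.
// Pointer and extended tags are omitted from this sketch.
Tagged addTagged(Tagged a, Tagged b) {
    Tagged r;
    if (a.tag == TAG_INT && b.tag == TAG_INT) {
        r.tag = TAG_INT;  r.i = a.i + b.i;
    } else {               // at least one float: work in floating-point
        double x = (a.tag == TAG_FLOAT) ? a.f : (double)a.i;
        double y = (b.tag == TAG_FLOAT) ? b.f : (double)b.i;
        r.tag = TAG_FLOAT;  r.f = x + y;
    }
    return r;
}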

Division: the Type is Very Important


Division requires careful attention to the types of the operands, and to the language rules. An integer division should yield an integer quotient, discarding any remainder or fractional part. A floating-point


division is expected to yield a quotient together with an approximation to the fractional part. Thus in C, the assignment
r = 7/3;

yields the value 2.00 (assuming r is declared as a float), since the division is performed in integer mode. The two operand types are integer, so the compiler assumes an integer division is wanted. The integer quotient, 2, is then converted to float and assigned to r. In Pascal, there are two different division operators, / and DIV. The / operator forces the division to proceed in floating-point mode, regardless of the types of the operands. The DIV operator is only legal if both operands are type integer, and performs the division in integer mode. Thus, in Pascal, the following assignments cause r1 to receive 2.00, and r2 to receive 2.3333:
r1 := 7 DIV 3;
r2 := 7/3;

Types are Important in Any Case


Although the difference between an integer and a floating-point operation is most obvious for division, it can matter for any of the arithmetic operations, if the result of some operation exceeds the range of the type. If the integer type is supported by 16-bit twos-complement arithmetic, then any multiplication or addition whose result exceeds 32767 will yield an overflow error. An 80x86 can be configured to generate an operating-system interrupt on any overflow, so that a totally erroneous result will not be permitted to re-enter an algorithm with no warning. The same values expressed in long integers (32-bit twos-complement) would generate the correct result. However, an overflow occurs in long ints if the result exceeds 2^31-1, or 2147483647. Similar limits exist with floating-point numbers, through the limits on the maximum and minimum exponent values. Also, floating-point numbers are subject to limits on precision. While certain numbers and fractions can be carried precisely in the IEEE standard floating-point form, most fractions cannot be precisely expressed. For example, 1/32 can be precisely carried (32 is a power of two), but 1/3 cannot be. Also, integers up to 16777216 (2^24) can be carried precisely in 32-bit floating-point form, but larger integers can in general only be carried exactly through 64-bit floats. (The Intel FPU actually supports 80-bit floats internally, with enough precision to carry 64-bit integers accurately.) Given the limits on range and approximation accuracy, the types of the operands are an important consideration for the careful programmer. The programmer should also clearly understand the implications of a critical operation, and decide whether loss of range or precision may adversely affect the algorithm. For example, calculations of indices for arrays must pay careful attention to the possible outcome of the value range, in order to ensure that a computed index lies within range of the array. A matrix calculation with a nearly singular matrix may turn out to be totally wrong if sufficient precision in the number formats isn't used. A larger range and higher precision require more memory space for the variables, and somewhat more time for the operations. Just choosing the largest possible integer size and precision out of ignorance or laziness therefore carries a price in memory and time performance. One of the implications of number size limits for the compiler writer arises in conversion of compiler constants and constant folding with large numbers. For example, it's reasonable for a compiler to have to deal with constant arithmetic of this sort:
const int small= 50000000000000 - 49999999999999;

This is clearly just 1, but each of the large numbers exceeds a 32-bit integer size. A reasonable programmer would not write such a monstrosity, but it could arise through the arithmetic combination of other named constants.
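One reasonable way to handle this, assuming the compiler is hosted on a machine with 64-bit integer arithmetic, is to fold integer constants in the widest available type and range-check the result afterward. A sketch:

#include <cstdint>

// Fold a 32-bit integer subtraction in 64-bit arithmetic, then verify
// that the result fits back into the target's 32-bit integer type.
// Returns false if the folded value overflows, so the compiler can
// report the error at compile time.
bool foldSub32(int64_t a, int64_t b, int32_t &result) {
    int64_t r = a - b;       // intermediates may exceed 32 bits
    if (r < INT32_MIN || r > INT32_MAX)
        return false;        // compile-time overflow error
    result = (int32_t)r;
    return true;
}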


Types, Variables and References


In general, a language will provide a way to declare both types and variables. A type declaration provides a way of assigning a user identifier (name) to a type. This is an abstract idea. A type declaration does not imply any allocation of memory space at runtime, nor does it require that any use of that identifier must be made later in the source program. In Pascal, a type declaration must follow the keyword type and looks like this:
TYPE k= array [15..25] of real;

No assembler code is generated by a type declaration. That information is kept internally in the compiler, and is only used to guide subsequent variable declarations and variable usage. A variable declaration assigns a name to some memory space. It also associates the variable name with a type. Here's what a variable declaration looks like:
VAR x, y: k;

Since a variable declaration allocates space, an assembler memory allocation pseudo-instruction must be generated by the compiler. For these two variables, the following declares the names x and y at an assembly level, and provides for a space allocation of 44 bytes each (11 instances of a 4-byte real):
.DATA
x_005   dd      11 dup (0)
y_006   dd      11 dup (0)

Recall that dd allocates doublewords (four bytes each) of storage. The 11 dup (0) allocates 11 doublewords, 44 bytes, each initially containing a 0. The name x has been translated into that string followed by a unique tag, _005. This is done because Pascal supports multiple scope levels, which means that the same name can be used to represent different variables in different scopes. Assembler supports only one scope. By appending a tag, we keep the Pascal user name, but also make sure that repeated use of the same Pascal name won't cause an assembler complaint. Notice that a type declaration uses the token =, while a var declaration uses the token :. Also, you can declare several variables associated with the same type, but giving more than one name to a type doesn't make sense. Variables x and y therefore have separate memory allocations, but are related in the sense that they are associated with the same type through the type name k. One can think of a type as a kind of cookie cutter as used in a kitchen, and a var as a cookie. A well-equipped kitchen will have an assortment of cookie cutters of different shapes available to the chef. A cookie cutter can't be eaten. Its only purpose is as a fast and mechanical way of cutting cookies from dough. There's no reason to use all of the cookie cutters on any given day or for any particular meal. And you only need one cookie cutter for any particular shape. Following this analogy, the variables are the cookies. You can make any number of cookies from the same cookie cutter, and cookies are directly useful (they can be eaten). The cookies also consume raw material (the dough), just as variables consume RAM memory at runtime. Variables and types should be referenced later in the program. A reference is some appearance of the declared identifier in an assignment statement, procedure call, or whatever.
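Returning to the name tags shown above: generating them is mechanical. Here is a minimal sketch; the actual pascal5 scheme may differ in detail:

#include <string>
#include <cstdio>

// Build a scope-safe assembler name such as x_005 from a Pascal
// identifier and a compiler-wide serial number.
std::string asmName(const std::string &pascalName, int serial) {
    char tag[8];
    std::snprintf(tag, sizeof tag, "_%03d", serial);
    return pascalName + tag;
}

// e.g. asmName("x", 5) yields "x_005"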

If a type isn't referenced anywhere, most compilers really don't care, since unused types consume no resources at run-time. If a variable isn't referenced anywhere, a warning should be generated by the compiler or linker. An unused variable has demanded some memory resource that isn't used in the program. The variable can therefore be deleted to improve the program's performance and space allocation. We've examined this mechanism of declarations, references, errors, warnings, etc. in chapter 6, in conjunction with the use of symbol tables.

Historical Background

Types in FORTRAN


The need for types in modern languages was recognized in the very earliest compilers. FORTRAN, first reported by Backus in 1957 [1], supports types through two mechanisms. All identifiers starting with one of the letters IJKLMN are considered type integer. All other identifiers are considered type floating-point. This convention made it unnecessary to declare variables prior to their use. One can simply start using a variable somewhere in the program, and its type is immediately apparent. Only one integer type and one floating-point type were supported by the early machines, so further distinctions were unnecessary. Declarations are required in FORTRAN for array variables, since an array dimension must be fixed prior to any use of an array variable, for example,
DIMENSION AX(50)

which stated that variable AX (assumed to be type float because of its first letter) is an array of 50 elements, indexed 1..50. A dimension statement also provides an opportunity for the compiler to allocate memory space for the array, in this case, enough contiguous memory space to hold 50 floats. Later versions of FORTRAN supported other data types, such as characters, strings and different precisions of numbers. These required special declarations in order to not destroy the existing first-letter convention. All FORTRAN versions supported the conventions of earlier versions, and this tended over time to create many artificial and clumsy statement forms. Functions in FORTRAN are just declared somewhere with a list of their parameter names. If a parameter has a dimension, it must be stated in a line following the opening of the function. Function calls cannot easily be checked against the function definition, since they may be in different files. Later versions of FORTRAN provided more safety in this regard, causing compiler or linker complaints if the numbers and types of parameters in a call disagree with the function definition. The first FORTRAN compilers provided no such guarantees. It was up to the programmer to not make such mistakes.

Types in BASIC
Another popular language with a long history and a primitive type system is Basic [2]. This has enjoyed many versions, is popular among amateur programmers and is still used in certain commercial software products. It has many variations, so many that in general, a Basic program written for one environment cannot be used in another. The most primitive Basic implementations (found in early microprocessor applications) used single letters as variables. All values were carried as floating-point numbers by default. A float can also carry an integer, but the idea of the variable somehow only representing an integer is lost. For example, one might write the following in Basic:
let I=55

Variable I will hold a floating-point number, 55.0, at runtime, despite the integer-like appearance of


55. This works very transparently for addition, subtraction and multiplication, since the result of these operations will continue to have no fractional part, provided that no precision overflow has occurred. The Basic number printers are also designed to notice whether a fractional part exists or not. If not, then the number is printed as an integer. Thus the line
print 5*6-3

will very nicely display


27

even though all the operations are performed in floating-point. Division raises some conversion issues. The line
print 1/3

will display
0.333333

even though the dividend and the divisor are integers. This operation therefore behaves like the same one in Pascal. If the integer part of a division is required in Basic, a special function must be used to extract the integer part of the division, i.e. INT(A/B). The division A/B itself is still carried out in floating-point and will have a fractional part. The INT function merely strips off the fractional part, returning a floating-point number that (most of the time) is equivalent to the desired integer division. String variables are supported in Basic, by suffixing the name with a dollar sign ($). Thus S$ is understood as a string, while S is considered a numeric variable. Strings and numbers are more-or-less interchangeable, since a string can carry a representation of a number and (sometimes) vice-versa. As in FORTRAN, Basic requires arrays to be declared with a dimension, i.e.
dim k(100)

Functions in early Basic versions were merely "calls" on specific line numbers, like this:
gosub 175

Any parameters required had to be part of the global set of variables provided by the compiler. Any line number could be "called", and the corresponding return (an explicit RETURN statement) had to eventually appear at runtime. More recent versions support named functions with parameters that can be passed by value or reference. These conventions (all parameters global; no scoped function code, no scoped function parameters or variables) made primitive Basic programs extremely difficult to maintain or adapt. Modern Basic, as used extensively in Microsoft products, has been adapted to modern type practices. Long names can be used; functions resemble those found in C++, and a function can be written that behaves as a member function of some object.

Types in Algol 60
One of the first fully typed languages was Algol 60 [3]. Early compilers for Algol 60 are reported in [4] and [5]. Algol (short for Algorithmic Language) provided a complete system for declaring all variable types, very similar to that found in Pascal. It also provided sophisticated mechanisms for calling functions and passing parameters. Function declarations could be scoped. Multi-dimensional arrays were supported. String literals appeared, along with a friendly set of IO functions. Algol 60 was also the first language whose syntax was defined by production rules, also called BNF, or Backus-Naur Form, named for the two inventors of the production rule system of describing languages.

Types in Perl
The Perl programming language [6] evolved from the extensive use of Unix shell scripts.


Variables can just be introduced in statements with no prior declaration, though most programmers prefer to declare them before any use. Simple names (with no prefix) are used to designate functions. A variable carrying a number or string must be preceded with a $ character. Two compound variable types are provided, an array and a hash. An array variable is introduced by preceding its name with the character @. The elements of an array can be any other type, including mixed types. Indexing can be used to both set and access array elements. A hash variable is introduced with the character %. A hash variable is effectively a symbol table, where the symbols are simple strings, and each associated element can be any Perl type. Functions carry no prefix symbol, unless a reference to a function is required, in which case the prefix \& is used as a form of "address of". Perl is implemented through a rich supporting structure, such that each variable is a self-describing object. Arrays require no prior dimensions and can be grown at runtime to any required size simply by providing a large enough index. Similarly, hash variables can be extended to support any number of elements at runtime. No object deletions are required. Object reference counts and a garbage collector relieve the programmer of the difficult (and dangerous) task of keeping track of allocated objects. Strings and numbers are essentially interchangeable, since Perl keeps track of their types at runtime. This feature sometimes makes Perl programming a bit nasty, since the object 0 can refer to the number zero, to a null pointer reference, or to an empty string, depending on its context. Numeric arithmetic is in floating-point, so the function int is provided to strip off the fractional part when the integer part of a division is needed.

Records and Pointers


Records and pointer types first appeared in Pascal in 1971 [7] and in C in 1978 [8], nearly as parallel developments. Jensen and Wirth's Pascal Report [9] defines type declarations along with many examples of their value and power. The C developers introduced the typedef as a way of declaring a type. Both languages support a heap, providing storage for runtime allocation of objects. With runtime allocation, a pointer type is needed as a way of referring to an allocated object; a pointer of course becomes a memory address at runtime. In Pascal and C, one normally declares some object type, for example a record type, then a pointer to that type, like this:
type rt = record
        i, j: integer;
        r: real;
     end;
var p: ^rt;

Later, an object of the pointer's type can be allocated through the statement
new(p);

where p is the name of a pointer to some type. The necessary space for the record is allocated from a runtime heap, and pointer p is set to the address of that space. The emphasis in Pascal is on safety, which requires strong type-checking. Pointers can only be used within certain contexts, in a way that minimizes surprises and confusion in a source program. In C, pointers can carry the addresses of objects allocated from the heap and also of other objects. Pointers and arrays are more-or-less interchangeable in the sense that an array element can be accessed by indexing either a pointer or the array name. Here's an example of these two approaches:
int a[22];   // an array
int *p;      // pointer to an int


p= a+3;      // p points to the 4th element of the array
p[2]= 15;    // the 6th element of the array is set to 15

The emphasis in C has always been versatility, which sometimes demands the loss of safety. The very generous use of pointers provided in C helps software developers write the low-level code needed in hardware drivers and operating systems, but it also introduces an element of danger through the possibility of invalid memory access.

Objects
Object-oriented programming became generally available through the Smalltalk language environment [5]. An object may contain data and functions that are bound to the data. Objects are created from a template called a class, which declares the data types and functions (or the complete function definition, as in C++ and Java). Associated with these are the powerful mechanisms of constructors, inheritance and name overloading. Although object-oriented programming is a very powerful and useful addition to a programming language, classes and objects are very similar to records or structs from the compiler's point of view. We won't discuss the compilation issues of objects in this book.

Abstract Type Declarations


Let's now discuss types and type declarations in a more precise way. A type can be defined as follows. Notice that some of these are compound types, using other types in their collection. A type expression (or type for short) is any one of the following:
- a constant or literal type
- a basic type
- a type name
- a subrange type
- an enumerated type
- an array type, consisting of a sequential collection of some other type
- a record type, consisting of a sequential collection of other mixed types
- a pointer type, which is a pointer to some other type
- a function type, whose parameters are a mixture of other types
- a class type
We discuss each of these next.

The constant type


This is a compile-time constant of some sort. The constant may itself be structured as some compound type, all of whose members are constant. Or, the constant may appear as a literal somewhere in a program, for example, the literal "22" in
x := 22;

A constant may be given a name in a declaration, for example,


const degree= 22;        (Pascal)

or

const int degree= 22;    (C, C++)

The name degree is therefore considered to be replaceable by the constant 22. Note that in C, a type


is assigned to the constant by the declaration, whereas this isn't the case in Pascal. A Pascal constant nevertheless has a type, which can be inferred from the form of the constant, i.e. integer, real, string. Named constants are an important tool in structured programming. A number by itself in the context of some algorithm is usually meaningless. Every literal number should be related to something else somewhere in the program, or be computable through some formula involving other constants. Often, the same numeric constant is needed in several different places. These imply a relationship between different portions of a program that is difficult to manage unless the constant is given a name. For these reasons, a good rule in programming is to never write a literal number into structured code. The only reasonable exception might be the numbers 0 and 1, which usually speak for themselves and have no relationship with anything else. Constants that just appear in an expression must be assigned a type by the compiler in any case. This can usually be done by inspecting the form of the literal. For example, in C, if the literal is in the form of a decimal number with no decimal point or trailing exponent part, for example
22

then it can be assigned one of the integer types char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long. Which of these is chosen depends on the magnitude of the number. If the number is less than 128, it can be assigned the type char. If less than 256, it can be assigned the type unsigned char. These assignments should not cause a type error in an operation, because C also supports implicit type casting. Thus the char type 22 will automatically be cast to an int or a long int if that's what's needed in some operation. C also defines certain suffixes to force the type of the literal. If the suffix L is used, then the literal's type becomes long. If suffix U is used, it becomes an unsigned type. Floating-point numbers can be recognized by the appearance of a decimal point or an exponent part. Thus each of the following is considered a floating-point type in C:
3.6    4E-6    3.6F

To further distinguish a 32-bit float from a 64-bit double number, ANSI C provides these rules: the default is double, so both 3.6 and 4E-6 are considered type double (an E exponent part does not change the default); the suffix "F" marks the type as float, i.e. 3.6F.
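Returning to the integer literals: here is a sketch of the magnitude test for an unsuffixed decimal literal, under the simplifying assumption of 8-bit chars, 16-bit shorts and 32-bit ints (the enum and function names are illustrative, not part of any real compiler):

enum CIntType { C_CHAR, C_UCHAR, C_SHORT, C_USHORT, C_INT, C_UINT };

// Choose the smallest integer type that can hold an unsuffixed
// decimal literal; implicit casts widen it later as required.
CIntType literalType(unsigned long v) {
    if (v < 128)           return C_CHAR;    // fits a signed char
    if (v < 256)           return C_UCHAR;
    if (v < 32768)         return C_SHORT;
    if (v < 65536)         return C_USHORT;
    if (v < 2147483648UL)  return C_INT;
    return C_UINT;
}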

The Basic Type


Also called a simple type, this often includes the boolean type, an integer type, a floating-point type, a character type and a string type. In Pascal, we have boolean, integer, real, char and string. In C, we have char, short, int, long int, float and double. There are also signed and unsigned attributes in C. There is no string type as such in C; a string is considered an array of char. Also, Pascal does not have an explicit character literal form; a literal consisting of a single character may be considered either a character or a string, depending on its context. These type distinctions reflect the typical binary number storage schemes used in processors. Most modern machines carry numbers in multiples of 8-bit bytes. One byte can therefore represent 2^8 = 256 possible combinations of bits. An unsigned char (in C) therefore has a range of 0 to 255 in magnitude. A signed char is typically defined as a set of bit combinations that support the values -128 to +127, and all values in between. (The signed char -1 is 0xFF, -128 is 0x80, and +127 is 0x7F.)

The precise internal form of each of the number types is machine-dependent in general. One must refer to the manufacturer's specifications for the processor (and its companion compilers) for details. ANSI and IEEE standards exist for certain number types, and these are increasingly being followed by modern chip manufacturers. The IEEE standard for floating-point numbers is followed by most modern processors, for example, although some also support an alternative form that provides higher performance or compatibility with older versions of binary code. The machine dependence of certain types has carried over into a certain vagueness in language specifications. For example, the C int type may be a sign-magnitude 16-bit number on one processor, or a twos-complement 32-bit number on another. The C Reference Manual [6] contains a detailed discussion of the number forms supported by ANSI C.

Big Endian vs. Little Endian


A low-level consideration with basic types is the order in which they most efficiently appear in memory. This ordering is built into the processor instruction set, and can be changed only at considerable expense in machine cycles and instructions. This ordering is dictated by the instructions that transfer a register value (perhaps 32 bits) to memory, or vice versa. Most Intel processors (the 80x86 line) are little-endian machines: the least significant byte of a number will appear at a lower memory address than the most significant byte. For example, the hex integer 0x01020304 will appear in memory like this on an Intel platform:

Mem addr   150  151  152  153
byte        04   03   02   01

The Motorola 68x00 processors are big-endian processors: the least significant byte of a number appears at a higher memory address than the most significant byte. Here's how the integer 0x01020304 will appear in memory on a Motorola platform:

Mem addr   150  151  152  153
byte        01   02   03   04

The Motorola PowerPC processor was designed to support either little- or big-endian format, depending on a mode setting. The mode can be switched rapidly at runtime, making it possible for the processor to support either format efficiently. It was designed to efficiently emulate an Intel processor for DOS and Windows 95/98 applications, and also to support Apple Macintosh software applications, written for a Motorola 68000. Whether a machine is big-endian or little-endian matters in a compiler at the point at which the compiler must generate a sequence of bytes to represent a multi-byte number. It's also an important consideration to a programmer when writing a binary file containing numbers that must be copied as a file from one architecture to another. The task of transferring binary information containing multi-byte numbers over a network to some other processor requires the services of a special program; that service is called marshaling.

To summarize, the basic types supported in Jensen-Wirth Pascal are:

char
integer
boolean
real
string

A string literal is a sequence of characters embedded in a pair of single quote-marks. A string literal consisting of a single character is also considered to be type char. A literal boolean is either of the reserved words TRUE or FALSE. The basic types of ANSI C are variations on these types:

char
int
short
long
float
double

The char type and the three integer types can also carry the attribute unsigned. A character literal is a single character embedded in a pair of single-quote marks. A string literal is a sequence of characters embedded in a pair of double quote-marks. A boolean type is often provided in C through a typedef and defines for the reserved words TRUE and FALSE, like this:

#define TRUE 1
#define FALSE 0
typedef int Boolean;

The Type Name


This is a name that refers to some type. The named type may of course be any type, including the compound forms.

The subrange type


A subrange is in general a subset of the range of some other type. In Pascal, a subrange is considered to be a contiguous set of integers with a lower and an upper bound, declared like this:
TYPE S = -35..75;

This states that the type S is a subrange containing all the integers between -35 and +75, inclusive. A number of type subrange is generally interchangeable with an ordinary integer. It is also usually carried in an integer binary form at runtime. Some Pascal compilers provide runtime type-checking, in which case any attempt to set a subrange type with a value outside its range produces a runtime trap, closing the program with an error message. Using the subrange type, one might declare an unsigned 16-bit integer as follows:
TYPE uint = 0..65535;

The subrange is also useful in specifying the dimension of an array. It's only useful with integers, and has no counterpart with floating-point numbers. A Pascal compiler could be designed such that any variable declared as a subrange type would be prevented at runtime from carrying a value outside its defined range. For example, if variable V is declared like this:
var V: S;

then the assignment


V := -36;


should be trapped at runtime as an error. (In fact, the compiler could detect and complain about this particular assignment at compile time.) If all out-of-range subrange values are trapped, then the use of subrange variables as array indices also guarantees that no array bounds violation can occur. Just how a subrange is to be carried in memory as a variable is an optimization issue. Obviously, enough bits must be allocated to support the specified integer range, but there is the added consideration of performance. A subrange can be the operand of any arithmetic operator, so the microprocessor's constraints should be considered. A reasonable compromise is to choose an 8-bit, 16-bit or 32-bit integer type that contains the subrange. Arithmetic can then be efficiently performed, and the value can be carried on an even byte boundary in memory, making access efficient.
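A hedged sketch of that compromise, considering only signed containers (a real compiler would also consider the unsigned ones):

// Return the storage size in bytes for a subrange lower..upper,
// choosing the smallest signed integer container on an even byte
// boundary (1, 2 or 4 bytes).
int subrangeSize(long lower, long upper) {
    if (lower >= -128 && upper <= 127)       return 1;   // 8-bit
    if (lower >= -32768 && upper <= 32767)   return 2;   // 16-bit
    return 4;                                            // 32-bit
}

For the subrange -35..75 this yields one byte; for 0..65535 it yields four bytes, since that range doesn't fit any signed 16-bit container.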

The enumerated type


An enumerated type is a set of names, each associated with a unique cardinal number. The enumerated type is a form of subrange containing the cardinal numbers associated with the enumerated type names. For example, the Pascal declaration
TYPE E = (alpha, beta, gamma);

declares the type E to be a subrange consisting of the unique values {alpha, beta, gamma}. The cardinal numbers associated with these names should be considered arbitrary, but in most Pascal compilers, they are assigned 0, 1, 2, etc. from left to right. The enumerated type names are considered to be ordered, by the ordering in their declaration. Thus we can say that alpha precedes beta, and beta precedes gamma. This clearly requires that they be assigned to successively increasing integers. In ANSI C one declares an enumerated type as follows:
typedef enum {alpha, beta, gamma} E;

The programmer may assign explicit values to the names, as follows:


typedef enum {alpha=4, beta=3, gamma} E;

Enumerated type names in C are considered ordered by their integer assignments. Names that are not explicitly assigned in the declaration are assigned by the compiler, in sequential order, each one greater than its predecessor (so gamma above receives the value 4; ANSI C does not actually require the resulting values to be unique, although a Pascal-style compiler may choose to enforce uniqueness). It's important to realize that each of the names appearing in an enumerated type declaration represents a compile-time constant. A later reference to the name will be treated as a reference to that constant value, within certain type constraints. Therefore each of the names in the enumerated type must also be declared as a distinct constant. This also implies that the same name can't be used in two different enumerated type declarations; for example, the following declarations are illegal:
TYPE
  E= (alpha, beta, gamma);
  F= (beta, gamma, delta);

This is illegal because beta and gamma in the F type were previously declared in the E type. An enumerated variable can be carried at runtime in different ways. Here, the main consideration is the minimum and maximum values represented by the enumerated type. Thus E in the above type declaration is expected to support the integer range 0..2, which can fit in just 2 bits. However, performance issues suggest that it be carried in one or two bytes. It's difficult to assign an illegal value to an enumerated variable in Pascal due to the compiler's type-checking and a severely limited number of operations on such variables. No arithmetic on an enumerated type variable is permitted. An enumerated variable can only be assigned-to from an enumerated variable of the same type, or be incremented or decremented. However, function ord(e)


returns the integer value of an enumerated type e, providing a kind of one-way escape hatch from these constraints. Runtime checking is clearly needed to strictly enforce the range checking on enumerated types, as well as scalar types. (No runtime checking is provided in the student Pascal compiler, but can be switched on in most commercial compilers). An array can be declared whose dimension is an enumerated type, for example:
type e= (alpha, beta, gamma, delta);
var xa: array[e] of real;

The effective dimension is then the range from the smallest to the largest enumerated value. In this case, the dimension is 4.

The array type


The array type is a linear, ordered, continuous collection of a single other type, when instantiated as a variable in memory. For example, the Pascal declaration
var A: array[1..10] of integer;

associates array(1..10, integer) with the variable name A. At runtime, a block of memory will be allocated to carry 10 integers, most likely 40 bytes for 32-bit integers. These can be accessed through indexing within the range specified. In Pascal, both the lower and the upper bounds of an array dimension are normally specified. The dimension may also be an enumerated type or a subrange type. For variable A, an index less than 1 or greater than 10 would be illegal. Either or both of the dimension bounds can be negative. The compiler only requires that the lower bound be less than or equal to the upper bound. An element of array A is accessed with an index that's compatible with the type of the dimension. For example, the following assignment sets the element at index i+1 to the value of the element at index i:
A[i+1] := A[i];

The type of variable i must be integer. Some Pascal compilers can generate runtime bounds-checking code that prevents any indexing of an array beyond its declared dimensions. Using such a compiler, a Pascal program may contain lots of bugs, but will immediately report any array indexing error at runtime, rather than just corrupt memory and later crash.
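The address arithmetic and the optional bounds check that such a compiler must generate for A[i] can be expressed directly in C++. This is a sketch of the generated code's effect, not of the compiler itself:

#include <cstdlib>

// Compute the address of A[i] for an array declared [lower..upper]
// of elements of size elemSize bytes, starting at address base.
// A bounds-checking compiler emits the equivalent of the test below.
char *elementAddr(char *base, long i, long lower, long upper, long elemSize) {
    if (i < lower || i > upper)
        std::abort();                     // runtime bounds violation trap
    return base + (i - lower) * elemSize; // offset from the array's origin
}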

The record type


A record in Pascal is similar to a struct in C. It is a linear, contiguous collection of unrelated types. Note that each of the members of the collection may be some other type, including a record. For example, the Pascal declaration
type row = record
        addr: integer;
        lexeme: array [-15..15] of char;
     end;

declares row as containing:
- an integer starting at the relative position 0 in the record block, followed by
- an array of 31 chars, starting at the relative address 4 (assuming that an integer is 32 bits)
A variable of type row will clearly require 4 + 31 = 35 bytes in memory.


The variable declaration


var table: array [50..101] of row;

declares table to be an array of row records. There are 52 elements in this array, and each requires 35 bytes, so this variable will require 52 x 35 = 1820 bytes of memory. An element in a record is accessed in Pascal through a compound name using a period (.) as a separator, like this:
table[52].lexeme[0]

This returns a character in the lexeme array contained in the row record, which in turn is contained in the array table. This access notation is the same in Pascal as in C.
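It's worth tracing the address arithmetic behind this access. A sketch of the computation, using the layouts worked out above:

// Offset of table[52].lexeme[0] from the start of table:
//   skip (52-50) row records of 35 bytes each, then the 4-byte
//   addr field, then (0-(-15)) one-byte chars.
int offset = (52 - 50) * 35    // = 70 bytes of whole rows
           + 4                 // the addr field precedes lexeme
           + (0 - (-15)) * 1;  // index within lexeme = 15
// offset == 89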

The pointer type


In Pascal, the declaration
var p: ^row

declares variable p to have the type pointer to row. A pointer declaration can also appear in a type declaration, like this:
type pp= ^p;

which declares pp to be of type pointer to p, or pointer to pointer to row. At runtime, a pointer variable such as p is just a memory address. Most modern machine addresses are carried in 32 bits. Assigning to a pointer is not the same thing as assigning to the pointer's reference value. In Pascal, the statement
p := p1;

sets the address carried by pointer p to the address carried by p1. Both p and p1 must be pointers of the same type, i.e. they must point to the same type. The assignment doesn't change what's pointed to, it just makes p and p1 point to the same memory location. A pointer is dereferenced with the postfix operator ^. Thus
p^ := p1^;

causes the referenced contents pointed to by pointer p1 to be copied to the block of memory locations pointed to by p. A pointer can be empty, or nil. This is a special address (usually 0) that means that the pointer does not refer to any valid memory location. In Pascal, the reserved word nil stands for a pointer of any type that is empty. Thus
p := nil;

sets the value of pointer p to 0. In real mode, the "0" memory address may actually contain something interesting (on the 80x86, the interrupt vector table starts there). In protected mode, a "0" memory address is considered an illegal memory address, and dereferencing a nil pointer will cause the operating system to generate a runtime trap, usually reported as a segmentation violation.

The function type


A function maps elements of one set of types, its domain, to another set, its range. For example, the Pascal declaration
function f(a, b: char; c: real): ^ integer;

declares a function that takes three parameters, a, b, c, such that a and b are type char, while c is type real. It returns a pointer to an integer. Pascal functions have a long story, which is told in chapter 13.


The class type


A class is a collection of data types and function types. A particular class can therefore be expressed as a collection of the above types, each named, similar to a record, except that it will usually contain both functions and data. If a compiler has a representation for each of the other types, the extension to a class description is fairly straightforward. However, there are many special features of classes that require compiler design work, for example, the access restrictions, inheritance, binding of an object to a member function, use of friend (in C++), virtual functions, and more. Some commercial Pascal compilers support classes as an extension of the record type.

Forwarded Type References


The general rule in Pascal is that every variable must be declared before any reference appears. This works fine in all cases except some that involve recursive function calls and certain uses of pointers in circular-reference structures. For example, sometimes a type name is required in a declaration before it's been declared. Consider the following Pascal type declarations:
TYPE
  ptr1= ^r;
  ptr2= ptr1;
  ptr3= ^r;
  k= integer;
  r= real;

Note that ptr1 refers to a type r that isn't declared until several more lines have been parsed. All the compiler knows about r is that it must have some type. Also, ptr2 refers to ptr1, which is also only partially defined. Then, ptr3 is another pointer declaration referring to r, which is still not declared. The type r is finally declared in the last line above. These are examples of forward references and they are considered legal in Pascal. However, a forward reference can only be used with a pointer declaration. Thus, the following declarations are illegal:
TYPE
  a= b;       { illegal!! }
  b= c;
  c= real;

None of these references involve the pointer operator. The programmer can easily change their order to obtain a legal declaration set, as follows:
TYPE
  c= real;
  b= c;
  a= b;

Changing the order with pointer references to avoid a forward reference is often impossible. For example, one often refers to a record type by a pointer to that type contained within the record, like this:
TYPE r= record
        next, prev: ^r;
        value: real;
     end;

Here, it may seem that the record structure is declared before the next and prev references are required, but that's not the case in most compilers. Only after all the record fields are scanned is the record itself considered declared. So the next and prev references are to an unknown type.


One may also have two record structures containing pointers to each other, for example:
TYPE
  parent= record
            childp: ^child;
          end;
  child= record
            parentp: ^parent;
         end;

There's no ambiguity in this, provided the algorithm using these pointers is aware of the possible circularity. Nor is there any way to reorder these declarations in order to avoid a forward reference.
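A common way to support these forward references, sketched below with hypothetical names (the pascal5 compiler's approach may differ in detail), is to enter an undeclared name with a placeholder type object the first time ^name is seen, patch the placeholder when the real declaration arrives, and complain at the end of the type section about any placeholder still unresolved:

#include <map>
#include <string>

struct TypeDesc {
    bool resolved;      // false while only forward-referenced
    // ... description of the actual type, filled in when declared ...
};

std::map<std::string, TypeDesc*> typeTable;

// Called when ^name is parsed: return the type object, creating an
// unresolved placeholder if the name hasn't been declared yet.
TypeDesc *refType(const std::string &name) {
    TypeDesc *&slot = typeTable[name];
    if (slot == nullptr)
        slot = new TypeDesc{false};     // forward reference
    return slot;
}

// Called when "name = <type>" is parsed: fill in the placeholder.
// Pointers created earlier now see the completed type.
void declareType(const std::string &name /*, parsed type info */) {
    TypeDesc *&slot = typeTable[name];
    if (slot == nullptr)
        slot = new TypeDesc{false};
    slot->resolved = true;
}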

Pointer Declarations Refer to Names


When a pointer type is declared, it's sufficient to have the pointer refer to a name, rather than to a more general type structure. For example, we prefer p to q or r in the following:
TYPE
  p= ^name;
  q= ^array [0..15] of integer;
  r= ^array [0..15] of integer;

A type declaration for a pointer to a general type (such as q or r above) turns out to be relatively useless in Pascal. There's no way to set such a pointer to another one, since types are compared by name equality and not structure equality. In the previous declaration, pointer q and pointer r appear to be equivalent, since their type declarations are identical. Unfortunately, a Pascal compiler will consider them different, because they are not pointers to the same type name. Here's where the difficulty arises:
VAR vq: q;
    vr: r;

vq := vr;   { type violation!! }

The reason that the assignment of vr to vq is illegal is that Pascal considers q and r to be two separate types, even though their type declarations are identical. To repair this problem, the programmer is expected to declare the array as a type, then declare pointer types (or vars) using the new name, like this:
TYPE
  p= ^name;
  a= array [0..15] of integer;
  q= ^a;
  r= ^a;

The pointer types q and r are now considered equivalent, since they point to the same type name, a. Then, with vq and vr declared as above, we can legally write the following:
begin
  vq := vr;   { pointer assignment }
end

using name equivalence, but not if the pointers are only structure-equivalent. Pointer equivalence is important due to the strong type checking of Pascal. Without type equivalence, it's impossible to declare several pointers to the same object, assign them to each other, pass them as value parameters, etc. Because of name equivalence, the syntax of a declaration involving the ^ operator requires an identifier following the ^, not some more general type structure. Also, the identifier must eventually be resolved to a type name.


Summary
These considerations give rise to the following observations and rules followed in most compilers:
- Every variable name is assigned to a type in a declaration. The variable name is said to be declared in the declaration. The type is said to be referenced in the declaration.
- Types are abstract in nature, and result in no memory allocation at runtime. However, they are very useful as a means of relating different variables for the sake of special operators on the variables.
- Literals (numbers, strings, powersets) are also assigned types by the compiler, usually based on their syntax or size.
- There are a small number of basic types defined by the language that require no explicit type declaration by the programmer. In Pascal, these are integer, real, boolean, and in some implementations text, string and powerset.
- Compound types can be constructed by a programmer from the basic types and other constructed types. These take the form of subrange, enumerated, array, function, pointer, and record types. If the associated type itself requires a declaration, that declaration should precede its first reference. An exception is made for the pointer type, which sometimes must be referenced prior to its declaration.
- A variable is always associated with a type. Unlike types, variables consume memory space at runtime.
- Every variable should be referenced somewhere in the program code. The variable's declaration should precede its first reference.
- A type name may be referred to in many other type declarations or variable declarations.
These rules clearly imply an ordering of declarations: type declarations should precede variable declarations. Also, since constants are often required in both type and variable declarations, but not vice versa, constant declarations should precede both type and variable declarations. These considerations led Jensen and Wirth to enforce the strict ordering rule const, type, var in their Pascal declarations. More modern Pascal compilers permit intermixing of these declarations.

Carrying Types in a Compiler


The design task posed to any compiler designer is this: how should one carry variable and type information at compile time? It's clear from the previous examples of declarations that the symbol table must be involved. When a type is declared, some identifier must be entered in the symbol table. It must carry an attribute that describes its associated type. That identifier will likely appear later in the program. When looked up in the symbol table, it should have an attribute that fully describes it, based on just how it was declared. Since types can refer to other types, and those to yet other types, ad infinitum, it's clear that we need a support mechanism consisting of objects containing pointers to other objects. For example, if N is declared like this:
TYPE N= array[0..15] of real;

then we need a symbol table object carrying the name N, which will be linked to some sort of type object that describes an array. A typical array type will in turn depend on a type object for its dimension and for the array element. In this case, the dimension type must describe the 0..15 subrange, and the array element type must refer to the simple type real. A simple diagram of this plan is given in figure 1.

Recall that a Csymbol is carried in a symbol table as the attribute of the symbol name N. The attribute should be associated with a type. That's done by providing a typep pointer in class Csymbol. typep is a pointer to a Ctype class. (We will show the identifier name in each Csymbol box for the sake of readability. In fact, it resides in a base class carried in the symbol table, not in the Csymbol class). A Ctype object in turn may carry pointers to other Ctype objects. Here, the Csymbol object "N" points to a CarrayType object, which is one of several different derived classes of Ctype. That in turn has an index pointer (indexType) to a Csubrange object for 0..15, and an element pointer (elmntType) to a Csimple object for real. It's clear that we can create type structures of arbitrary complexity in this way.

[Figure 1 shows the Csymbol box "N", whose typep points to a CarrayType box; its indexType points to a Csubrange box (0..15) and its elmntType points to a Csimple box.]

Fig. 1. Description of an array type through objects connected by pointers.

For example, if the element type were something more complicated, we would only need to create a pointer structure that represents that type. Usually, variables are declared as either simple types or type names that have been declared previously. So it's only necessary to look up a type name in the symbol table, and pull its typep pointer in order to construct a new type. The Ctype class is defined in file pascal5/types.h. The Csymbol class is defined in file pascal5/csymbol.h. The companion cpp files are in the same directories with the same surname. We urge the reader to examine these files while reading the discussion in this section.
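Before turning to the actual classes, it may help to see the whole scheme in miniature. The following self-contained C++ sketch uses simplified stand-ins (Type, Simple, Subrange, ArrayType and Symbol are ours, not the pascal5 classes) to model the declaration above:

    #include <iostream>
    #include <string>

    // Simplified stand-ins for the Ctype machinery; these are NOT the
    // pascal5 classes, just a miniature of the same pointer structure.
    struct Type {
        virtual ~Type() {}
        virtual int getSize() const = 0;      // bytes needed at runtime
    };
    struct Simple : Type {                    // e.g. the type real
        int size;
        Simple(int sz) : size(sz) {}
        int getSize() const { return size; }
    };
    struct Subrange : Type {                  // e.g. 0..15
        int lower, upper;
        Subrange(int lo, int hi) : lower(lo), upper(hi) {}
        int getSize() const { return 2; }     // carried as a 16-bit integer
    };
    struct ArrayType : Type {                 // index and element pointers
        Subrange *indexType;
        Type     *elmntType;
        ArrayType(Subrange *ix, Type *el) : indexType(ix), elmntType(el) {}
        int getSize() const {
            return (indexType->upper - indexType->lower + 1)
                   * elmntType->getSize();
        }
    };
    struct Symbol {                           // symbol-table entry
        std::string name;
        Type *typep;                          // the name's type
    };

    int main() {
        // TYPE N = array[0..15] of real;  (assuming a 4-byte real)
        Simple    *realT = new Simple(4);
        Subrange  *ix    = new Subrange(0, 15);
        ArrayType *arr   = new ArrayType(ix, realT);
        Symbol n = { "N", arr };
        std::cout << n.name << " occupies "
                  << n.typep->getSize() << " bytes\n";   // prints 64
        return 0;   // (like the compiler, we never delete type objects)
    }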

We can summarize our approach as follows. The abstract base class Ctype can represent any type or combination type that we require, including the base types, all compound types, and literals. There are several derived classes of Ctype, one for each variety of type. Thus there will be a derived class for the simple types, one for arrays, another for records, etc.

The derived classes of Ctype often need to be distinguished through a Ctype pointer. In the example in the figure above, the CarrayType object carries two Ctype pointers, one for the index and the other for the element. These must be Ctype, not the type of the derived class, since any of several different types can be an index, and virtually any type can be an element. We've therefore introduced a pure virtual function classCode in the Ctype class. This returns an integer code that uniquely identifies the derived class. For example, classCode() returns CSUBRANGE if the derived class is a Csubrange type. This is in fact a form of runtime type checking.

The use of any identifier name in a type, as a reference, a field name, etc., will be carried in a symbol class Csymbol. This in turn will carry a pointer typep to a Ctype object. Also, some of the Ctype classes will carry one or more pointers to Csymbol objects. We will sometimes use an ANSI vector or list to carry a set of Ctype objects. Ctype and Csymbol will usually carry only a few pointers to other objects of these types.

We'd also like to avoid using a pointer as a reference to two different types of object, i.e. using one pointer to alternately refer to a Ctype or a Csymbol. This would be considered a type violation in the compiler's host code, and would likely lead to some nasty bugs. So if we declare a Ctype pointer in some object, it will only be used to point to a Ctype object, not some other object. However, we will permit a Ctype pointer to point to any of several different derived classes of Ctype.

Some of the Csymbol objects will be linked into a symbol table that can be searched when user identifiers appear in the program. Others will not be in the symbol table, or will only be in the table within certain scope intervals. In general, this is how the compiler will be able to determine the types of variables that appear in the source program: by (a) looking up the variable's name in the symbol table, and then (b) using its typep pointer to examine the name's type. The compiler needs to create Ctype objects for each of the basic types, perhaps including their names in the symbol table. Thus integer, char, real, and boolean need to be "predefined" in a Pascal compiler. Also, true, false, and nil need to be predefined.

We must also access the object structures in a systematic way when checking types in other declarations and in executable statements. The type rules of Pascal appear to be simple and straightforward, but there are a few exceptional situations that require considerable code to implement correctly. In general, the compiler source code that supports the type declarations and type checking is by far the most complicated and voluminous in the compiler.

The base class Ctype


A complete listing of the Ctype abstract base class and each of the derived class types can be found in file pascal5\types.h. We will describe only certain highlights in an effort to summarize the development of the type mechanism in a compiler. The Ctype class header is given below, in a simplified form.
class Ctype: public Csem, public Csemtype {
  Ctype *tnext;   // linked list of these things
public:
  Ctype(void);
  Ctype(const Ctype& type);
  Ctype(semType semt);
  virtual void printType(ostream& out) const= 0;
  virtual int getSize(void) const= 0;
  virtual void dump(ostream& out) const= 0;
  virtual pasType getPtype(void) const= 0;
  virtual int getLower(void) const= 0;
  virtual int getUpper(void) const= 0;
  virtual int classCode(void) const= 0;
  int isStringType(void) const;
};

This carries only two data members of any significance. One is tnext, which supports linking a set of Ctype objects into a common linked list. We will use tnext for only one purpose--to keep track of all the Ctype objects allocated from the heap, so that they can later be released en masse.


The other data member is in class Csem. It carries a classifier tag semt of type semType, which we'll explain in another section. In the meantime, we can think of semt as an enumerated type that provides a fine classification of the type.

getSize returns the number of bytes required to carry a variable of this type in memory. This is a virtual function, and one that works out the total size by calls on various derived-class functions. For example, the size of an array type is computed by multiplying the number of elements by the size of each element.

getPtype classifies the type through the enumerated type pasType. This classification is very useful in checking the legal type combinations used with the various operators. Here is the definition of pasType, as found in pasTypes.h:
typedef enum {tOTHER=0, tBOOL, tCHAR, tINT, tREAL, tSTR, tSET, tENUM, tSTRUCT, tPNTR} pasType;

- tBOOL marks this object as a Pascal boolean type, with legal values TRUE or FALSE.
- tCHAR marks this object as a single character type.
- tINT marks this object as an integer type. We use a 16-bit integer form at runtime for an integer.
- tREAL marks this object as a real (floating-point) type. We use a 32-bit floating-point form at runtime.
- tSTR marks this as a string type. There's no direct declaration for a string, but any array of char is interpreted as a string type.
- tSET marks this as a powerset. Pascal has a specific declaration for powersets, and several operators.
- tENUM marks this as an enumerated type. These have a list of names associated with the type.
- tSTRUCT marks this as an arbitrary record or array structure. There are a few ways in which such structs can be assigned to or passed as a function parameter.
- tPNTR marks this as a pointer to some other type.
- tOTHER means that this object has some "other" type.
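For example, this coarse classification lets an operator check reduce to a small switch over getPtype values. The helper below is a hypothetical sketch of how '+' might be validated; it is not a function from the student compiler:

    // Hypothetical helper, not from the student compiler: classify the
    // result of A + B from the coarse pasType codes alone.
    pasType checkAdd(const Ctype *a, const Ctype *b) {
        pasType pa = a->getPtype(), pb = b->getPtype();
        if (pa == tINT && pb == tINT) return tINT;     // integer add
        if ((pa == tINT || pa == tREAL) &&
            (pb == tINT || pb == tREAL)) return tREAL; // mixed-mode add
        if (pa == tSTR && pb == tSTR) return tSTR;     // concatenation
        if (pa == tSET && pb == tSET) return tSET;     // set union
        return tOTHER;                                 // caller reports error
    }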

getLower, getUpper carry subrange limits or array dimensions. They are meaningful only for certain kinds of type.

classCode is used to determine the derived class for a given type. The problem we often face is that we have a Ctype pointer, and we need to determine which derived class is associated with it. This is a virtual function that is supplied by each derived class. It returns an integer code that specifies the derived class. The code is carried in a name that is the same as the derived class type, except in capital letters. Thus the classCode for CarrayType is CARRAYTYPE.

dumpHeader, printType, dump are used for diagnostic purposes, stack dumps and the like. printType is used to print a reasonable symbol table of the symbols declared in a program.
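In practice, classCode is used as a guard before downcasting. A sketch of the usual pattern (the function itself is ours; classCode, CARRAYTYPE and getElmntType are from types.h):

    // Guarded downcast through classCode. The function is ours;
    // classCode, CARRAYTYPE and getElmntType appear in types.h.
    Ctype *elementTypeOf(Ctype *t) {
        if (t->classCode() != CARRAYTYPE)
            return 0;                        // not an array type
        CarrayType *at = (CarrayType *) t;   // cast is now safe
        return at->getElmntType();
    }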

Using Ctype
Since Ctype is an abstract class, it cannot by itself be instantiated. Only one of its derived classes can be instantiated. All such objects are allocated from the heap, which simplifies garbage collection. (Otherwise, we'd need a tag that states whether this particular object should be deleted or not). Whenever a Ctype object is allocated, it's also added to a linked list of all such objects. The global variable typeList points to this list. The compiler will never delete a Ctype object, except at the end of the compilation, just before deleting the main compiler object. Although a long program may accumulate a long list of types, we feel that this is a reasonable strategy.

Once a Ctype object is created and bound to a name, that object will not be copied. Thus many other types and variables may point to it. This also makes it easy to compare two types for equality, by just comparing their pointers. The fact that a particular Ctype may be pointed to by many other objects makes their deletion difficult. We've chosen to delete them en masse through the typeList pointer at the end of compilation, rather than try to manage some kind of reference counting or other scheme.
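A sketch of the final sweep this strategy implies is shown below. The freeTypes name and the constructor registration shown in the comment are our assumptions; only the tnext link and the typeList variable come from the source files:

    // Sketch of the end-of-compilation sweep. Assumes each Ctype
    // constructor registered the object:
    //     tnext = typeList;  typeList = this;
    // and that this cleanup code has access to the private tnext link.
    void freeTypes(void) {                   // hypothetical cleanup function
        while (typeList != 0) {
            Ctype *t = typeList;
            typeList = t->tnext;             // unlink the head
            delete t;                        // virtual destructor runs
        }
    }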

The semType Enumerated Type


The enumerated type semType is heavily used in the student compiler to sort out tokens and production rules. semType is carried in a special class Csemtype. Here's a typical Csemtype class. This is in file semtype.h as generated from the calc.grm grammar found in directory calc:
// This carries the semType enumerated type
class Csemtype {
public:
  typedef enum {
    /* 0*/  OTHER,
            ERROR,
            IDENT,      // identifier
            RESWORD,    // reserved word
            CHAR,
    /* 5*/  UCHAR,
            SHORT,
            USHORT,
            INTEGER,
            UINTEGER,   // fixed-point numbers
    /* 10*/ LONGINT,
            ULONG,
            FLOAT,      // floating-point numbers
            DOUBLE,
            CHARACTER,  // quoted character
    /* 15*/ STRING,     // quoted string
            SPECIAL,    // special token
            EOLTOKEN,   // end of line token
            EOFTOKEN,   // end of file token
            CCODE,      // C code sequence
    /* 20*/ DEBUG,      // debugger token
            GENL_KIND,
            ASSIGN,
            COMP,
            DIVIDE,
    /* 25*/ EQ,
            GE,
            GT,
            IFTHEN,
            INTVAL,
    /* 30*/ LE,
            LT,
            MINUS,
            MPY,
            NE,
    /* 35*/ PARENS,
            PLUS,
            PRTVAL,
            QUIT,
            REALVAL,
    /* 40*/ UMINUS,
            VARIABLE,
            BOOLEAN,
            LAST_FLAG } semType;

  Csemtype(void) {}
  Csemtype(const Csemtype &cp) {}
};

The semType enumerated type is a composite of several different kinds of flag.

- Flag OTHER is used for any default situation in which "no flag" seems the only appropriate choice.
- Flag ERROR is used to mark the object as containing some kind of error. An object so marked can be ignored, and can be used to mark any parent objects as ERROR as well.
- Flag IDENT is used to mark a token as an identifier, as understood by the lexical analyzer.
- Flag RESWORD marks a token as a reserved word of some kind; for example, any of the tokens
FOR BEGIN END IF THEN etc.

are marked RESWORD. A reserved word resembles an identifier, but is reserved in the language.

- The flags CHAR through DOUBLE are used to classify a literal number. These are assigned by the lexical analyzer when the number is scanned, and are based on the magnitude of the number along with any special tags.
- Flag CHARACTER is used to mark a single quoted character. (It's not used in Pascal).
- Flag STRING is used to mark a string consisting of zero or more sequential characters.
- Flag SPECIAL is used to mark such tokens as
= + * /

These are also defined by the language.

- Flag EOLTOKEN marks this as the end-of-line token.
- Flag EOFTOKEN marks this as the end-of-file token.
- Flag CCODE is used in the parser for the syngraph system. It represents a section of C or C++ code enclosed in matching braces, i.e. { ... }.
- Flag DEBUG is used to mark some specially defined token that trips a compile-time debugger into action.
- Flag GENL_KIND is used to tag any production rule that lacks a specific flag.

All of the above semType definitions are defined and used with any grammar. Most of the remaining semType definitions come from a specific grammar, and are the production rule flags in alphabetic order. Thus ASSIGN, COMP, DIVIDE, EQ, ... VARIABLE are associated with the production rules found in calc.grm. Flag BOOLEAN is defined in calc.grm in a "Newtags" statement. This flag is required internally by the compiler, but is not associated with any production rule or token. Flag LAST_FLAG is guaranteed to be the last semType flag. This facilitates creating a finite set or array based on the number of flags found in this list.


Constants and Literals


All literal constants are carried by the type Cliteral, derived from Ctype. This class, like some of the others, is fairly complicated, as it must serve several roles:
#define CLITERAL 27
class Cliteral : public Ctype {
  // holds a literal constant
  // NOTE: in this implementation, we have no literal sets, only
  //       those constructed at runtime.
  pasType ptype;
  union {
    double    dvalue;   // tREAL
    long int  ivalue;   // tINT, tCHAR, tBOOL, tENUM
    Cset     *set;      // tSET
  };
  string svalue;        // tSTR
  const CenumItem *enumBase;
  const Ctype     *setBase;
public:
  Cliteral(long int iv, pasType pt) :
      Ctype(INTEGER), ptype(pt), ivalue(iv), enumBase(0), setBase(0) {}
  Cliteral(double dv) :
      Ctype(DOUBLE), ptype(tREAL), dvalue(dv), enumBase(0), setBase(0) {}
  Cliteral(const string& sv) :
      Ctype(STRING), ptype(tSTR), svalue(sv), enumBase(0), setBase(0) {}
  Cliteral(Cset *setv, const Ctype *base) :
      Ctype(SETCONST), ptype(tSET), set(setv), enumBase(0), setBase(base) {}
  Cliteral(CenumItem *it) :
      Ctype(INTEGER), ptype(tENUM), ivalue(it->getLower()),
      enumBase(it), setBase(0) {}
  virtual ~Cliteral(void);
  double getDouble(void) const;
  long int getInteger(void) const;
  boolean getBoolean(void) const {return (boolean) ivalue;}
  const string& getStringValue(void) const;
  const Cset *getSet(void) const {return set;}
  void setValue(long int v);
  const Ctype *getBaseType(void) const;
  void setPtype(pasType p) {ptype= p;}
  void setInteger(long int i) {ivalue= i;}
  void setDouble(double d) {dvalue= d;}
  void setString(const string& str);
  virtual bool isCharArray(void) const;
  virtual void printType(ostream& out) const;
  virtual int getSize(void) const;
  virtual void dump(ostream& out) const;
  virtual pasType getPtype(void) const {return ptype;}
  virtual int getLower(void) const {return getInteger();}
  virtual int getUpper(void) const {return getInteger();}
  virtual int classCode(void) const {return CLITERAL;}
};


The define CLITERAL is used in the classCode function. This is what marks this particular variant on a Ctype as a Cliteral. All the virtual functions required by the pure virtual functions in Ctype are defined in this class, some with relatively simple definitions.

This class is designed to carry a ptype classifier and one of several different kinds of literal constant. The particular constant is carried in the union struct, and must be identified through the ptype classifier. The size in bytes of this literal is worked out in the member function getSize.

This object may be constructed in a variety of different ways, as indicated by the constructors. It will normally be constructed through some combination of syntax rules that describe a literal. Of these, a numeric or string literal is the simplest, as they are collected by the lexical analyzer. However, there are other syntax rules that define the formation of a powerset literal, and these must be shaped into a single Cset object for this class. The destructor notes whether a set is carried; a carried set is a heap allocation, and eventually requires a delete call.

There are various helper functions provided. For example, if the ptype is tENUM, then the function getBaseType returns a pointer to the enumerated type's base Ctype. Also, for a string, getStringValue returns a string reference, and may be called many times.
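For concreteness, here is how scanned literals might be wrapped using the constructors above; the surrounding framing and the values are invented for illustration:

    // Wrapping scanned literals, using the constructors shown above.
    Cliteral *intLit  = new Cliteral(42L, tINT);         // integer 42
    Cliteral *charLit = new Cliteral((long) 'A', tCHAR); // character 'A'
    Cliteral *boolLit = new Cliteral(1L, tBOOL);         // the constant true
    Cliteral *realLit = new Cliteral(3.14159);           // a real literal
    Cliteral *strLit  = new Cliteral(string("hello"));   // a string literal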

Simple Types
All the simple types (char, boolean, integer, real) are carried by the Csimple class, defined below:
#define CSIMPLE 16
class Csimple: public Ctype {
  // a simple variable
  pasType ptype;
public:
  Csimple(pasType pt);
  virtual void printType(ostream& out) const;
  virtual pasType getPtype(void) const {return ptype;}
  virtual int getSize(void) const;
  virtual void dump(ostream& out) const;
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const;
  virtual int classCode(void) const {return CSIMPLE;}
};

The general form of the object is specified in the ptype parameter, and also in the semt parameter carried by the base class Csem. ptype is supplied by the constructor. The size of this object depends on ptype, and is worked out in the member function getSize, defined out of line. This in turn depends on a constant array sizeofType found near the top of file types.cpp.


Example of Simple Type Declarations


Figure 2 below shows how simple and literal types are used to declare the built-in types integer, real, boolean and char, along with the Boolean literals true and false.

[Figure 2 shows six Csymbol boxes (A, C, E, G, I, K) for the names "integer", "real", "boolean" and "char" (skind= sTYPE) and "true" and "false" (skind= sCONST). Each typep points to a Ctype object: Csimple boxes with ptype tINT, tREAL, tBOOL and tCHAR for the four types, and Cliteral boxes with ptype tINT and ivalue 1 and 0 for true and false.]

Fig. 2. Csymbol and Ctype structures used to support the built-in simple types and the Boolean literals true and false.

Each of the boxes in figure 2 represents an object created at compile time. They are labelled A, B, C, etc. so that later figures in this chapter can refer to these objects. A Csymbol object will in general be linked into a symbol table, and must carry a string name. It carries skind, a name, and a typep. The skind field is an enumerated type that describes the object in a general way. The typep field carries a pointer to an object derived from Ctype. Thus, box A and its companion Ctype box B describe the built-in type integer. These are called built-in, because the compiler will set these up before any program statements are parsed. In that way, the appearance of a built-in identifier, for example integer, will be discovered as a type, and have a Csimple type object associated with it. The identifier false will be discovered as a constant, with the integer value 0. Figures 3, 4 and 5, given later, will illustrate some type and var declarations that depend on these and on each other, as we'll see.

Subrange Type
The Csubrange class defines a subrange object. This just carries a lower and upper integer value, as can be seen from the class definition below:
#define CSUBRANGE 17
class Csubrange: public Ctype {
  int vlower, vupper;
public:
  Csubrange(int cl, int cu) :
      Ctype(SUBRANGE), vlower(cl), vupper(cu) {}
  virtual void printType(ostream& out) const {
    out << "Csubrange(" << vlower << ".." << vupper << ')';
  }
  virtual int getLower(void) const {return vlower;}
  virtual int getUpper(void) const {return vupper;}
  virtual pasType getPtype(void) const {return tINT;}
  virtual int getSize(void) const {return SIZEOFINT;}
  virtual void dump(ostream& out) const;
  virtual int classCode(void) const {return CSUBRANGE;}
};

These limits must be supplied in the constructor. Of course, getLower and getUpper now return something sensible. A subrange type is always a subset of the integer class, and will always be carried in a 16-bit field. Its size is therefore the size of an integer, SIZEOFINT. (This could be optimized so that a small subrange might be carried as a byte).

Enumerated Type
An enumerated type class CenumType describes the type of a set of names, not any particular name. (However, each such name is considered to be of this type). It must therefore carry a list of names, which we carry through a list of Csymbol class objects. Pointer head points to the first of these names, and the Csymbol pointer next points to the next one, etc. Parameter count is the total number of names associated with this particular type. An enumerated type object is carried as a 16-bit integer, and this has the size SIZEOFINT. Also, getLower returns 0 and getUpper returns count-1, as might be expected. The class CenumItem is made a friend of this one to facilitate setting pointers and other data values.
#define CENUMTYPE 18
class CenumType: public Ctype {
  // this comprises a chain of names associated with the
  // enumerated type
  friend CenumItem;
  Csymbol *head;
  int count;
public:
  CenumType(Ceval *idlist);
  int getCount(void) {return count;}   // how many in this list?
  virtual void printType(ostream& out) const {
    out << "CenumType[" << count << ']';
  }
  virtual pasType getPtype(void) const {return tENUM;}
  virtual int getSize(void) const;
  virtual void dump(ostream& out) const;
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const {return count-1;}
  virtual int classCode(void) const {return CENUMTYPE;}
};

Although CenumType points to a list of Csymbol objects, each Csymbol object carries a Ctype pointer. This points to a CenumItem object, defined as follows:
#define CENUMITEM 19
class CenumItem: public Ctype {
  // this is one enumerated type name, essentially like a
  // constant integer
  CenumType *parent;    // the parent type
  int value;            // this value
  const string name;    // name in the symbol table
public:
  CenumItem(CenumType *p, int v, const string& pn) :
      Ctype(VIDENT), parent(p), value(v), name(pn) {}
  int getValue(void) {return value;}
  const CenumType *getParent() const {return parent;}
  const string getName() const {return name;}
  virtual void printType(ostream& out) const {
    out << "CenumItem(" << name << "=" << value << ')';
  }
  virtual pasType getPtype(void) const {return tENUM;}
  virtual int getSize(void) const {return SIZEOFINT;}
  virtual void dump(ostream& out) const;
  virtual int getLower(void) const {return value;}
  virtual int getUpper(void) const {return value;}
  virtual int classCode(void) const {return CENUMITEM;}
};

This carries a pointer back to its parent CenumType object. It represents one of the enumerated type names. The specific name is carried. The associated integer value is in value. The size, lower and upper limits and other attributes should be clear from the class definitions.

This may seem like a very roundabout way of describing something as simple as a sequence of names. However, each of the names and the name of the enumerated type itself (if there is one) will have to be carried in the symbol table so that they can be referenced later. We also want to carry the relationship of any one enumerated name to its parent type, as we've done here, for the sake of later type checking. We would also like a quick way of finding the numeric equivalent of any of the enumerated names, and that's carried in the CenumItem variable value.

Pointer Type
The CpntrType class is quite simple. However, it's the first one that carries a pointer to another Ctype class, its pointer base class:

#define CPNTRTYPE 20
class CpntrType: public Ctype {
  Ctype *baseType;
  Csymbol *nameLink;   // used for forward referencing
public:
  CpntrType(string& str);
  CpntrType(void) : Ctype(OTHER), baseType(0), nameLink(0) {}
  Ctype *getBaseType(void) {return baseType;}
  void setBaseType(Ctype *bt) {baseType= bt;}
  void fixReferences(Ctype *newType);
  Csymbol *getNameLink(void) const {return nameLink;}
  void setNameLink(Csymbol *np) {nameLink= np;}
  virtual void printType(ostream &out) const {
    out << "CpntrType";
  }
  virtual pasType getPtype(void) const {return tPNTR;}
  virtual int getSize(void) const {return SIZEOFPNTR;}
  virtual void dump(ostream& out) const;
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const {return 0;}
  virtual int classCode(void) const {return CPNTRTYPE;}
};

Here, the datum baseType points to the type to which this pointer refers. The pointer declaration will always make this clear. The nameLink pointer is used to enable the compiler to resolve forward pointer references. The problem here is that we may have to create a CpntrType object without knowing the type of the referenced name. However, we know the name, and can create a Csymbol object that carries the name. The nameLink pointer refers to that object, and can be used to fill in the (missing) baseType pointer after all the type names are resolved. The size of this object at runtime is the size of a pointer, SIZEOFPNTR, which is 16 bits in this implementation.
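A sketch of the patch-up pass this requires is shown below. The pendingPointers list, the loop framing and the direct typep access are our assumptions; getBaseType, getNameLink and setBaseType are from the class above:

    // Patch forward pointer references once a scope's type declarations
    // are complete. The pendingPointers list, the lookup through the
    // Csymbol, and the error report are assumptions of this sketch.
    void resolveForwardPointers(list<CpntrType*>& pendingPointers) {
        for (list<CpntrType*>::iterator it = pendingPointers.begin();
             it != pendingPointers.end(); ++it) {
            CpntrType *p = *it;
            if (p->getBaseType() != 0)
                continue;                     // already resolved
            Csymbol *sym = p->getNameLink();  // carries the referenced name
            if (sym != 0 && sym->typep != 0)  // declared by now?
                p->setBaseType(sym->typep);   // fill in the missing link
            // else: report an "undeclared type" error
        }
    }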


[Figure 3 shows three declarations and their supporting objects. For type ent= (red, blue, green): a Csymbol box M ("ent", skind= sTYPE) points to a CenumType box N (head, count= 3), which heads a linked list of Csymbol boxes P, Q, R ("red", "blue", "green", each skind= sCONST); each of these points to a CenumItem box (S, T, U) carrying a parent pointer, a value (0, 1, 2) and its name. For type sr= 8..47: a Csymbol box V ("sr", skind= sTYPE) points to a Csubrange box W (vlower= 8, vupper= 47). For type src= sr: a Csymbol box X ("src", skind= sTYPE) points to the same Csubrange box W.]

Fig. 3. Csymbol and Ctype objects for an enumerated type, a subrange and a type equivalence.

Example of Enumerated Type and Subrange Declarations


Figure 3 (above) shows how a typical set of type declarations is supported by Csymbol and Ctype objects. An enumerated type declaration (boxes M, N, P, Q, R, S, T, U) clearly requires many objects to support the declaration
type ent= (red, blue, green);

The enumerated type declares name "ent" as a type consisting of the three enumerated constants red, blue, green. We clearly need a Csymbol object for the name "ent" (box M). It points to a CenumType object (box N), which is derived from Ctype. This carries a pointer head to a linked list of Csymbol objects (boxes P, Q, R). It also carries a count of the enumerated constants, count. Each of the enumerated objects (P, Q, R) points to a CenumItem object (boxes S, T, U). This defines the object as a member of an enumerated type. Each CenumItem carries the assembler-level value of its enumerated item; thus "red" is assigned value 0, "blue" is assigned value 1 and "green" is assigned value 2.

Figure 3 also shows a subrange declaration,
type sr= 8..47;

It requires two objects, V and W, as shown. Object V is (as usual) a Csymbol object, which carries the name "sr", and is linked into a global symbol table. The associated Ctype object is the derived class Csubrange, and it carries the lower and upper bounds of the subrange (box W). Our compiler will treat a subrange as though it were an integer. The last declaration in Figure 3 is a type equivalence:
type src= sr;

This just supplies a new name to an existing type. The name "sr" must be found in the symbol table, else there's an error. The Csymbol object discovered (through a symbol table search) will be object V, which points to the type object W. Therefore the new Csymbol object X will have to point to W. Object X will also have to be linked into the symbol table, of course.
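In code, a type equivalence amounts to sharing one Ctype pointer. A sketch, in which lookup, enterSymbol, reportError and the Csymbol copy are hypothetical stand-ins for the real symbol-table interface in csymbol.h:

    // Handling "type src = sr;": share the existing Ctype object.
    // lookup, enterSymbol, reportError and the copy are hypothetical.
    Csymbol *old = lookup("sr");           // must already be declared
    if (old == 0)
        reportError("undeclared type name");
    Csymbol *alias = new Csymbol(*old);    // new entry sharing old->typep
    enterSymbol("src", alias);
    // Because bound Ctype objects are never copied, checking that two
    // names have the same type is then a simple pointer comparison.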

Set Type
Pascal supports a powerset through a declaration such as the following:
VAR pset: SET OF 0..25;

This particular declaration allocates space for a set of 26 elements, each of which is an integer in the range 0..25. Any of the 26 elements may either be in the set or not, in any combination. Only small integers are considered as powerset elements in our implementation. A set of subrange is also permitted, if the subrange corresponds to small integers.

A powerset is carried at runtime by an array of bits whose dimension is one more than the largest element. Thus, here we need to carry 26 bits, which requires 4 bytes (4 x 8 = 32 bits, but 3 x 8 = 24 is too small). In addition, the first byte will carry the maximum number of elements that this set can legally carry, here 26. Thus this powerset looks like this at runtime (showing a set containing elements 0, 1 and 8):

    byte 0:  26         (the set's capacity, maintained by the set functions)
    byte 1:  11000000b  (bit positions 0..7; elements 0 and 1 are present)
    byte 2:  10000000b  (bit positions 8..15; element 8 is present)
    byte 3:  00000000b  (bit positions 16..23)
    byte 4:  00000000b  (bit positions 24..31; positions 26..31 are unused)

Each of the bytes following the first one carries 8 bits, indicating which of the elements are in the set. For example, if element 0 is in the set, then the second byte will be 10000000b (binary). If elements 0 and 1 are in the set, the second byte is 11000000b (as shown). If element 8 is in the set, then the third byte is 10000000b (as shown). The membership of element N in the set is therefore marked by setting the Nth bit in this array of bytes, considered as an array of bits.

The Pascal Report proposes a limit of 256 members in any one set. This is clearly needed in order to constrain the memory required for sets. Since 256 is too large to carry in the first byte, we have adopted the convention of setting the first byte of a powerset to one less than the maximum. This byte is supported by the set functions, and not the user, hence can be arbitrarily defined. A powerset is supported by a battery of assembler and C functions at runtime. A complete description of these functions can be found in the assembly file aservice.asm and the C file service.c, found in the lib directory. Our student Pascal supports powerset operations through a number of operators as follows:


Operator    Description
A + B       Set union of sets A and B
A * B       Set intersection of A and B
-A          Set complement of A
A = B       Set A is identical to set B
A < B       Set A is properly contained in B
A <= B      Set A is contained in B
A <> B      Set A is not identical to B
A > B       Set B is properly contained in A
A >= B      Set B is contained in A
A := B      Set B is copied into set A
i IN A      Element i (an integer) is in set A

In addition, Pascal supports a special form for the creation of a set:


[ 5, e+3, 9, k ]

This is a list of expressions that evaluate to integers. By placing brackets around the list, we form a set of these integers. Notice that the elements of the set can, in general, be expressions evaluated at runtime. The type of a literal set must therefore be inferred from its members. However, none of the elements can legally be greater than 255; the set functions will constrain this at runtime. Powerset literals are supported by a Cset entry in the Cliteral class. (Cset can be found in file lib\sets.h. It provides a flexible mechanism for creating and operating on powersets.) Powerset types are supported by the CsetType class, given below:
#define CSETTYPE 21
class CsetType: public Ctype {
  int members;
  int offset;
  Ctype *elmntType;   // type of each element
public:
  CsetType(void);
  CsetType(Ctype *btype);   // declare a set from SET OF <type>
  // if constant, yield a set pointer (this deletes it!)
  virtual ~CsetType(void);
  virtual void printType(ostream &out) const {
    out << "CsetType";
  }
  virtual pasType getPtype(void) const {return tSET;}
  virtual int getSize(void) const {return 2+members/8;}
  int getOffset(void) const {return offset;}
  int getMembers(void) const {return members;}
  Ctype *getElmntType(void) {return elmntType;}
  virtual void dump(ostream& out) const;
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const {return members-1;}
  virtual int classCode(void) const {return CSETTYPE;}
};

The data members carry all that is needed regarding the set. members is the maximum number of declared set members. offset is one less than the minimum member value. elmntType is a pointer to the type of each element. The set declaration can take several different forms, as follows:
SET OF char;
SET OF boolean;
SET OF subrange;
SET OF enumeratedType;
SET OF 8..56;

Although each of these ultimately must resolve to a set containing no more than 256 elements, the associated types are different. We need to keep track of the element type in order to perform type checking on literal sets and elements to be added to sets. The size of a powerset is the number of bytes required to support it at runtime. This turns out to be the members divided by 8 plus 2. Some of the bits in the last byte may not be used. When a set is allocated in memory, the first byte must be set to its maximum size. This requires some runtime code in certain cases.
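Given this layout, the membership test behind the IN operator is a two-line bit computation. The function below is our sketch of the idea; the real functions live in lib\service.c and lib\aservice.asm:

    // Membership test for "n IN s" against the layout described above:
    // byte 0 holds the capacity; element bits follow, most significant
    // bit first, so element 0 is bit 7 of byte 1. A sketch only.
    bool setMember(const unsigned char *s, int n) {
        if (n < 0 || n > s[0])
            return false;                    // outside the set's capacity
        int byteIndex = 1 + n / 8;           // skip the capacity byte
        unsigned char mask = 0x80 >> (n % 8);
        return (s[byteIndex] & mask) != 0;
    }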

Powerset Runtime Operations


The runtime operations required on a powerset are fairly sophisticated. No Pentium instructions support these operations directly. We therefore support each powerset operation through a runtime function. Most of these are written in C, and can be found in the file lib\service.c. Since C functions require special setup code, cover functions written in assembler are provided, and can be found in lib\aservice.asm. In general, powerset operations are supported by a virtual machine that carries a stack of sets. (The same stack is used for string operations). A binary powerset operation, such as union, is supported like this in assembler:
; 156: sc3 := sc1 + sc2;
    lea   ebx,SC1_138
    call  pushSet
    lea   ebx,SC2_139
    call  pushSet
    call  setUnion
    lea   ebx,SC3_140
    call  setAssign

This example can be found in file pascal5\t1.pas. Here, sc1 and sc2 are declared as powersets. The lea/call instructions form the address of a set and push it into the powerset stack. When call setUnion is executed, the two sets are in the stack. This function forms the set union, leaving it on the stack top. The last lea/call instruction pair causes the stack top to be popped and copied into the memory address found in ebx. These functions are safe in the sense that they pay attention to the capacity of each set. It happens that set sc3 is a set of char, with a maximum of 256 elements. Other sets may be much smaller, and it's important that the set functions protect the memory space beyond the allocation limit of the set.

String Type
Pascal supports a string literal as an arbitrary sequence of characters enclosed in single-quote marks, for example:
'here is a string of some length'

A quote mark can be embedded in such a string by duplicating it. Thus


''''

represents a string consisting of one single-quote mark.

A special string type is provided in our student Pascal that supports variable-length strings (up to a limit of 255 characters). For example,
VAR s1: string;

declares s1 as a variable-length string with a maximum length of 255 characters. This is supported at runtime as an array of char of length 257. The first byte carries the maximum length of the string, and a given string is terminated by a null byte. Since we'd like to combine strings of different sizes, it's important to keep track of the dimension of each variable string. We choose to do this through the first byte of the array; this will hold the maximum possible number of string characters, which will be 2 less than the array dimension. The first byte carries the maximum size, and we allow a byte for a null character, marking the end of any one particular string. The null character also makes it easy to write C functions to concatenate, read or write strings. The string functions supported by Pascal are as follows:

String operation    Description
A + B               Concatenate A and B
A = B               True if A is equal to B
A < B               True if A is alphabetically "less than" B
A <= B              True if A is less than or equal to B
A <> B              True if A is not equal to B
A > B               True if A is "greater than" B
A >= B              True if A is greater than or equal to B
A := B              Copy string B into string variable A

A number of built-in string functions are also specified in the Pascal Report [9]. These find substrings, delete portions of a string, etc. Special functions can also be written by any user. Strings can also be intermixed with, or assigned to, an array of char. The supporting string functions are robust with respect to string lengths, in order to protect memory beyond the allocated size. They can be found in the files lib\service.c and lib\aservice.asm.

The following assembler fragment is an example of how two string variables are concatenated. As with powersets, the string operations are supported by a virtual stack machine. The two strings are pushed into the stack, then concatenated, then assigned to the target. These operations are not optimized for time performance, but for space. There are several special string operations available in the 80x86, but these do not provide bounds checking, and also require several setup instructions each.
; 141: s3:= s1 + s2; { 8 bytes }
    lea   ebx,S1_131
    call  pushString
    lea   ebx,S2_132
    call  pushString
    call  strConcat
    lea   ebx,S3_133
    call  strAssign

The concatenation of string constants is done within the compiler, requiring only a pushString followed by a strAssign to achieve an assignment.


File Type
Pascal supports several file functions, and file declarations. These in turn require a CfileType type class, as follows:
#define CFILETYPE 22
class CfileType: public Ctype {
  // this is actually a file of char
  Ctype *ftype;   // what this is a file of
public:
  CfileType(Ctype *ft);
  Ctype *getType(void) {return ftype;}
  virtual void printType(ostream &out) const {
    out << "CfileType";   // (the original prints through cout; out is meant)
  }
  virtual pasType getPtype(void) const {return tOTHER;}
  virtual int getSize(void) const {return SIZEOFPNTR + ftype->getSize();}
  virtual void dump(ostream& out) const;
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const {return 0;}
  virtual int classCode(void) const {return CFILETYPE;}
};

Pascal considers a file to be a sequence of objects of a certain type. A record whose size is 79 bytes, for example, can be read or written in binary mode as a sequence of 79-byte objects. A file of char can be considered a text file. In fact, a special declaration for a text file is provided. A text file can be used within a read or write statement. These are designed to format various simple objects as string objects, sending their string equivalents to a file (write) or interpreting them in a text file (read). They work roughly the same as the C functions printf and scanf, except that the formatting conventions are different. In any case, a Pascal object declared as a file should be associated with a CfileType object. This contains a pointer to the element class of the file, and a few supporting functions.

At runtime, a file object is carried in memory as a block of bytes structured as follows:

    word 0:          integer file number, returned by the C open function
    double word 1:   pointer to the file name, a Pascal-style string
    remaining bytes: a file buffer, large enough to carry one file element object

Record Type
As explained above, a record is a collection of mixed type declarations. Each declaration is called a record field, and looks just like any other declaration in Pascal. For example, here's a typical record declaration:
type r= record
          x, y: real;
          i, j, k: integer;
          a: array [5..25] of real;
          rsub: record
                  k, l: integer;
                  r: real;
                end;
        end;

The line
x, y: real;

declares two record fields of type real, one labelled x and the other labelled y. This is followed by


three integer record fields (i, j, k), then an array field a. The record field rsub is a nested record field containing three more elements, k, l, r. Each record field is assigned to a separate memory area. Since one real requires 4 bytes, and an integer requires 4 bytes, this record structure will require a total of 24*4 + 5*4 = 116 bytes of memory, since it contains 24 reals and 5 integers. The Pascal compiler is free to assign these elements to the record memory space in any order. Since our target machine is a Pentium, with byte addressing, we can assign the elements to consecutive memory locations. Hence the field x will have an offset of 0 from the record origin, field y will have an offset of 4 bytes, variable i an offset of 8, j an offset of 12, etc.

Note that Intel warns that accessing a 32-bit variable in memory that does not fall on an even 4-byte boundary requires a few more cycles. For maximum performance, this suggests that the record and array fields should be aligned with respect to their types: a 32-bit float or int should be aligned on a 4-byte boundary by padding any space with 1, 2 or 3 unused bytes. Also, all records and arrays should be similarly aligned, else alignment will be lost within these structures.
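The rounding such alignment requires is the standard power-of-two trick sketched below; note that the student compiler as described simply packs fields consecutively and does not insert this padding:

    // Round a field offset up to the next multiple of its alignment.
    // Works for any power-of-two alignment (1, 2, 4, 8, ...).
    int alignUp(int offset, int alignment) {
        return (offset + alignment - 1) & ~(alignment - 1);
    }
    // e.g. a 4-byte real following a 2-byte field at offset 6:
    //     alignUp(6, 4) == 8, so 2 padding bytes are inserted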

Variant Record Declarations


Pascal also supports a variant record declaration, in which selected record fields share the same memory locations. This corresponds to the union struct in C. Here's a simple variant record:
TYPE date= integer;                                  { 4 bytes }
     status= (married, widowed, divorced, single);   { 2 bytes }
     person= RECORD
       ss: integer;                                  { offset 0, 4 bytes }
       sex: (male, female);                          { offset 4, 2 bytes }
       CASE ms: status OF                            { offset 6, 2 bytes }
         married, widowed: (mdate: date);            { offset 8, 4 bytes }
         divorced: (ddate: date; firstd: Boolean);   { offset 8, 6 bytes }
         single: (indepdt: Boolean);                 { offset 8, 2 bytes }
     END; {person}

The person record opens with two non-variant fields, ss and sex. These are allocated at the record offsets 0 and 4, respectively. Since sex is an enumerated type, carried as a 2-byte integer, the next available offset is 6. The field ms has the record offset 6. The type of this variable (status) must agree with the types of the next set of labels (married, widowed, divorced, single). It can be used to indicate which of the variant fields is being referred to in an application of this record.

There are four variant fields declared in this record, but since they are covered by the CASE ... OF keyword (there's no matching END keyword), these fields share the same record origin offset, 8. Thus the offset of mdate is 8. So is the offset of ddate and indepdt.

The point of all this is to economize on the memory space required for such a record structure. The marital status of a person makes a difference in the kind of data required in this record, and these case fields are mutually exclusive, so there's no reason that they should be given separate offsets. By using the CASE structure, the total space required by this record is 14 bytes. If all the fields were assigned to separate offsets, the space required would be 20 bytes, almost twice as much. This difference in size doesn't matter much if only a few persons were to be carried in a file or in memory, but it makes a considerable difference if several thousand persons were to be carried.

The syntax of the CASE structure should be clear from the above example. Each of the variant fields is labelled by one or more constant names (married, widowed, divorced, single). For each of these, a list of field declarations can be written in, enclosed in a pair of parentheses. Notice that the case designator (ms) also carries some information about the person, in addition to directing the access to one specific case field. Also notice that the variable ms is only known at runtime. Thus the compiler cannot verify at compile time that a particular field access (for example, to mdate) is associated with ms = married or widowed. The user's program must verify this. However, the compiler could insert runtime validity checking for variant records, making them safer.

Record class definitions


The record class is carried in a CrecType class, given below. This contains a pointer to a list of CrecField field objects. It also carries two byte offsets, offset and caseOffset. The integer offset is a running helper datum used during the collection phase of compiling a record structure. The integer caseOffset is the offset of each of the variant members of a CASE structure. If there's no CASE structure, then this just carries the total bytes in the record, i.e. the offset of the next available space.
#define CRECTYPE 24
class CrecType: public Ctype {
  CrecField *fields;   // linked list of record fields
  int offset, caseOffset;
public:
  CrecType(Ceval* FieldList, Ceval* CaseField);
  virtual void printType(ostream &out) const {
    out << "CrecType";
  }
  virtual pasType getPtype(void) const {return tSTRUCT;}
  virtual int getSize(void) const;   // computed from fields list
  void append(CrecField *cf);
  virtual void dump(ostream& out) const;
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const {return 0;}
  virtual int classCode(void) const {return CRECTYPE;}
  void doFieldItem(Csymtab& fieldNames, const string& name, Ctype *type);
  void doFieldList(Csymtab& fieldNames, Ceval *flist);
  CrecField* getFields(void) {return fields;}
};

Function getSize is computed from the fields list, by working through all the record fields. If there's no CASE structure, then this is just the sum of all the field sizes. If there is a CASE structure, then it computes the minimum size required to carry any of the case alternatives. Each of the CrecField objects carries its own offset, which is computed during construction of these objects, taking the CASE structures into account:
#define CRECFIELD 23
class CrecField: public Ctype {
  friend CrecType;
  CrecField *next;          // linked list of fields
  const string& fieldname;
  Ctype *baseType;
  CrecType *parent;
  int offset;
public:
  CrecField(const string& name, Ctype *btype, CrecType *p, int ofs) :
      next(0), fieldname(name), baseType(btype), parent(p), offset(ofs),
      Ctype(FTYPEDECL) {assert(baseType!=0);}
  virtual ~CrecField(void);
  virtual void printType(ostream &out) const {
    out << "CrecField(" << fieldname << ',' << offset << ')';
  }
  virtual int getSize(void) const {return baseType->getSize();}
  virtual void dump(ostream& out) const;
  virtual pasType getPtype(void) const {return baseType->getPtype();}
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const {return 0;}
  virtual int classCode(void) const {return CRECFIELD;}
  CrecField *getNext(void) {return next;}
  void setNext(CrecField *n) {next= n;}
  int getOffset(void) const {return offset;}
  Ctype *getbaseType(void) {return baseType;}
  const string getfieldName(void) {return fieldname;}
  CrecType* getParent(void) {return parent;}
};

Each record field carries its own offset from the origin of the parent record. A pointer parent points to the parent record. Field next forms a linked list of these record fields. They will have the same ordering as their appearance in the source code. The fieldname is the identifier assigned to this field, and baseType is a pointer to some Ctype object that represents the field type.

For an ordinary non-CASE record declaration, we create one CrecType object, and then create as many CrecField objects as are required for the record fields. In our first example above, these would be for x, y, i, j, k, a, and rsub. The rsub record declaration would cause a new CrecType object to be opened (in a recursive way), and attached as the Ctype pointer to the rsub field entry.

For a CASE record declaration, we follow the same pattern, except that the case labels are discarded. Each of the field lists inside the parentheses is added to our linked list, in the order of appearance. However, their offsets are adjusted according to their positions within the CASE structure. As before, if one of these opens a new record structure, that will be recursively modeled as a new CrecType class object, with its own CrecField list. It turns out that the case labels are of no importance at runtime, unless runtime variant checking is required. We won't support that feature, but if it's required, one can simply add another integer field in the CrecField class to carry that constant.

Thus in our second record declaration, given below, a CrecType object for the person record will be created. Its field member list will contain CrecField objects for ss, sex, ms, mdate, ddate, firstd, and indepdt, each with an appropriate offset. Since ss, sex and ms require a total of 6 bytes, that's the offset of the members mdate, ddate and indepdt. The offset of firstd will be 8 bytes.
person= RECORD
  ss: integer;
  sex: (male, female);
  CASE ms: status OF
    married, widowed: (mdate: date);
    divorced: (ddate: date; firstd: Boolean);
    single: (indepdt: Boolean);
END; {person}

The size of a person record will clearly be 6 bytes plus the size of the largest case variant. This is the divorced variant, containing 3 bytes, yielding 9 bytes for the memory space required for any instance of this record.
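The size rule just stated is easy to capture in code. Below is a sketch of the computation CrecType::getSize must perform for a variant record; the framing and the precomputed sizes are our assumptions:

    // Size of a variant record = size of the fixed part + the largest
    // variant. Here fixedSize and the per-variant sizes are assumed to
    // have been computed from the CrecField lists.
    int variantRecordSize(int fixedSize, const int *variantSizes, int n) {
        int largest = 0;
        for (int i = 0; i < n; i++)
            if (variantSizes[i] > largest)
                largest = variantSizes[i];
        return fixedSize + largest;
    }
    // person: fixed part of 6 bytes, variants of 2, 3 and 1 bytes,
    // so variantRecordSize(6, sizes, 3) yields 6 + 3 = 9 bytes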

Array Type
An array type class needs to carry pointers to the type of the dimension (usually a subrange) and to the type of each element (which can be any type). Here's the CarrayType derived class:
#define CARRAYTYPE 25
class CarrayType: public Ctype {
  int lower, upper;    // index range
  Ctype *indexType;    // type of index
  Ctype *elmntType;    // element type
public:
  CarrayType(Ceval *cplist, Ctype *base);
      // cplist: Ceval list containing OrdinalType type objects
  virtual void printType(ostream &out) const {
    out << "CarrayType[" << lower << ".." << upper << ']';
  }
  virtual pasType getPtype(void) const;
  virtual int getLower(void) const {return lower;}
  virtual int getUpper(void) const {return upper;}
  virtual int getSize(void) const {
    return (getUpper() - getLower() + 1) * elmntType->getSize();
  }
  int getElmntSize(void) {return elmntType->getSize();}
  Ctype *getElmntType(void) {return elmntType;}
  Ctype *getIndexType(void) {return indexType;}
  virtual void dump(ostream& out) const;
  virtual int classCode(void) const {return CARRAYTYPE;}
};

The fields lower, upper carry the dimensions of this array as integers, regardless of the type of the dimension parameter. This information could also be extracted from the indexType, hence is redundant. The elmntType is a pointer to the type of each element. Arrays in Pascal can be multi-dimensional, as suggested by the following examples:
type A1= array [0..5, 3..8] of real;
     A2= array [0..5] of array [3..8] of real;

These two declarations are in fact equivalent. Each provides a doubly-dimensioned array of 6*6 = 36 real numbers. Either declaration is carried in the compiler as two CarrayType objects. The first one has the bounds 0..5, and its element type is a second CarrayType object. The second one has the bounds 3..8, and its element type is a Csimple type associated with the type real. The size of an array in bytes (getSize) is easily computed from the size of its element type multiplied by the number of elements. The latter must be fixed at compile time, hence the size is a compile-time constant.
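As a worked example, here is the getSize arithmetic for A1 above, assuming 4-byte reals:

    // getSize for A1 = array [0..5, 3..8] of real, modeled as two
    // nested CarrayType objects (assuming a 4-byte real):
    int realSize  = 4;
    int innerSize = (8 - 3 + 1) * realSize;   // array [3..8] of real: 24
    int outerSize = (5 - 0 + 1) * innerSize;  // array [0..5] of ...: 144
    // A1 therefore occupies 144 bytes, fixed at compile time.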

Examples of Record Type and Array Type Declarations


Figure 4 shows how a set of Csymbol and Ctype objects can support an array and record declaration. Consider the array declaration first:
type sra= array [sr] of real;

This refers to prior declarations shown in figures 2 and 3. For example, the type sr is described in figure 3, and the type real in figure 2. A Pascal program would of course carry these type declarations in the order shown, so that referenced types appearing on the right side of a declaration will have been declared previously.

An array declaration starts with a Csymbol object, box Y. This points to a CarrayType object, box Z. CarrayType is of course a derived type of Ctype. It carries the upper and lower array bounds. These must be inferred from the array index type sr, which in turn refers to box W in figure 3. An array declaration requires a type for its index as well as for each element, in an indexType and an elmtType. Each of these has been previously constructed. The indexType should point to box W in figure 3. The elmtType should point to box D in figure 2. These can clearly be found by searching the symbol table for the names "sr" and "real", respectively, then grabbing the associated typep pointer. These are boxes W and D, respectively. We've shown these in dashed lines to indicate that they've appeared in another figure.


[Figure 4 shows the two declarations and their supporting objects. For type sra= array [sr] of real: a Csymbol box Y ("sra", skind= sTYPE) points to a CarrayType box Z (vlower= 8, vupper= 47), whose indexType points to the Csubrange box W of figure 3 and whose elmtType points to the Csimple (tREAL) box D of figure 2. For rec1= record x: integer; y: real; z: sra; end: a Csymbol box AA ("rec1", skind= sTYPE) points to a CrecType box AB (fields, offset= 0), whose fields pointer heads a linked list of CrecField boxes AC ("x", offset 0, baseType the tINT Csimple box B), AD ("y", offset 4, baseType a tREAL Csimple) and AE ("z", offset 8, baseType the CarrayType box Z); each CrecField carries a parent pointer back to AB.]

Fig. 4. Example Csymbol and Ctype objects to support an array type and a record type.

The record declaration


type rec1= record x: integer; y: real; z: sra end;

requires some way of carrying each of the record field names "x", "y", and "z". These names are not entered in the symbol table, since they are effectively bound to a record variable of type rec1. Another way of expressing this idea is to realize that the names "x", "y", "z" can also be the names of types or variables in the same scope as this record declaration, so they need to be hidden from a global symbol table search.

We achieve that by setting up three CrecField objects shown in figure 4 as boxes AC, AD, AE. These comprise a singly-linked list connected to the base CrecType object, box AB. Each CrecField object carries the field identifier, its base type, and its offset within the record. (The offset of the first object is 0. The second offset is 4 because an integer requires 4 bytes. The third offset is 8, since a real is 4 bytes). Each of the base types has already been declared. For example, the base type of name "x" is "integer", whose type is a Csimple object, box B in figure 2. The record type name "rec1" will be supported by a Csymbol object linked in the global symbol table, box AA. This points to box AB, a CrecType object. This in turn carries a pointer to the linked list of CrecField objects for the field name declarations. The linked list AC, AD, AE in our implementation is carried by an ANSI list. Each CrecField object carries a parent pointer back to its CrecType object, as shown.

Function Type
Function and procedure types are supported by a Cfunction class, whose data members are given below. (There are many member functions, which we won't discuss here):
#define CFUNCTION 28
class Cfunction : public Ctype {
  // a function or procedure type
  Csymbol* rsymp;           // return symbol and type
  list<Csymbol*> parms;     // linked list of pointers to formal parameter names
                            // VAR parameters are going to be pointers to something
  int fullyDeclared;        // 0 if only a FORWARD, 1 if declared
  int classification;       // 0: user function/procedure,
                            // 1: read[ln], write[ln]
                            // >= 2: special handling, usually inline
  int varbytes;             // bytes needed for its local variables
  list<Csymbol*> localVars; // list of local variables
                            // (doesn't include record members)
  bool isFunc;
  int rbytes;               // bytes needed in stack to support return value
public:
  Cfunction(bool isf) :
      rsymp(0), fullyDeclared(0), rbytes(0), classification(0),
      varbytes(0), isFunc(isf) {}
  virtual ~Cfunction(void);
  virtual void printType(ostream& out) const {
    out << "Cfunction(" << (getRtype() == 0 ? "proc)" : "func)");
  }
  virtual int getSize(void) const {return 0;}
  virtual void dump(ostream& out) const;
  virtual pasType getPtype(void) const;
  virtual int getLower(void) const {return 0;}
  virtual int getUpper(void) const {return 0;}
  virtual int classCode(void) const {return CFUNCTION;}
  // many member functions omitted
};

As is obvious, this is a complicated class. It is expected to support a number of function and procedure variations, as follows:

- Functions return a value, while procedures do not. This has implications for checking the validity of function calls.
- A prototype of a function or procedure may be declared several times prior to the full declaration. The compiler is expected to check that the prototype agrees with the actual declaration in all particulars.
- A list of the formal parameter names and types for the function must be carried, so that the compiler can validate actual parameters and generate appropriate conversion code as needed.
- Some functions are built in, i.e. known to the compiler and supplied automatically with no user declarations required. We'll simply pre-declare them as we did with the type names integer, real, boolean and char. Pascal is different than C here: built-in functions are not used in C, where all functions, including standard library functions, must be declared through a prototype.
- The IO functions write and read can carry variable numbers of parameters, with different types. These are translated by the compiler as a special case. Pascal doesn't otherwise support functions with a variable number of parameters.
- A function or procedure opens a new symbol table scope. However, the new scope opens between the function name and the formal parameter list. Thus if the function name is at scope level N, the formal parameters, return value, local variables and code block are at scope level N+1. The scope level is returned to N at the end of the function's code block.
- The production rules that specify the function header, formal parameters, and code block are very complicated. Mapping the rule actions into these class objects through production rule semantics is also complicated.

The data members of a Cfunction object are fairly straightforward.

The datum rsymp is NULL for a procedure. For a function, it points to a Ctype object describing the return type of the function.

The member fullyDeclared is a flag that will be set when a full declaration is seen. The grammar syntax cannot control the appearance of prototypes vs. actual declarations, hence we use this flag for the purpose. We wish to rule out two or more declarations of the same function, except while all the prior declarations are merely FORWARD prototypes. This flag is set when an actual declaration is found, and it can be used to raise an error for any subsequent actual or FORWARD declarations of this function.

The member classification provides a way of classifying this function into one of these categories:
- a function declared in the Pascal program, written by a user (classification = 0),
- the read, readln, write or writeln special functions (classification = 1),
- a builtin function. These are specified by the classification number. Some are coded inline; others are coded as calls to assembler functions, which may then call a C function. There are a finite number of these, as specified in the Pascal Report.

The datum parms is a linked list of Csymbol objects. These are the formal parameter names and types (recall that each Csymbol object carries a Ctype pointer). These are not necessarily linked into the current symbol table, but they live in this list for the duration of the scope of their associated function. If the function header is merely a prototype (marked by the keyword FORWARD in the Pascal source code), then the list is constructed and attached to this Cfunction object, but not exposed in the symbol table. After all, the FORWARD function can be declared far ahead of the actual declaration. While the function name will be in the symbol table, its formal parameters must not be. Only when an actual declaration is seen are the formal parameters exposed in the symbol table. This is done by walking through the linked list and sending each of the Csymbol objects to the symbol table. The formal parameter symbols must be unlinked from the symbol table at the end of the function's scope, otherwise the deletion of the symbol table top will also delete them. We need to continue to carry the formal parameter linked list through to the end of the scope of the function, which may occur much later than the end of the scope of the function's formal parameters.

The local variables associated with this function are also carried on the list localVars. This is needed in order to generate certain assembly code sequences. The space required for the local variables is in varbytes. This is computed as soon as the local variable declarations are complete.

Each of the local variables and formal parameters requires an offset from the stack frame register EBP. The local variables have a negative offset from EBP, and the formal parameters a positive offset. The formal parameter offsets are computed by calling a function doOffsets, shortly after all the formal parameter declarations are scanned. doOffsets of course works through the linked list of formal parameters, and takes into account the size of the stack frame as well as the sizes of the various types expected to be on the stack. Note that these sizes must exactly correspond to the number of bytes pushed on the stack prior to a function call.
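Here is a minimal C++ sketch of what doOffsets might look like. A std::vector stands in for the book's Csymbol linked list, the Formal struct is a hypothetical stand-in, and the 12-byte frame overhead (saved EBP, saved static link and return address, as laid out in chapter 13) is assumed for illustration:

    #include <vector>

    struct Formal {
        int stackSize;   // bytes this parameter occupies on the runtime stack
        int offset;      // EBP-relative offset, filled in below
    };

    // Parameters sit above the saved EBP, saved static link and return
    // address (4 + 4 + 4 = 12 bytes), so the last-pushed (rightmost)
    // parameter begins at [EBP+12].
    void doOffsets(std::vector<Formal>& parms) {
        int offset = 12;                 // frame overhead below the parameters
        // parameters are pushed left to right, so the rightmost one is
        // nearest EBP; walk the list from right to left
        for (auto it = parms.rbegin(); it != parms.rend(); ++it) {
            it->offset = offset;         // e.g. [EBP+12], [EBP+14], ...
            offset += it->stackSize;     // next parameter sits above this one
        }
    }

Run on the rref example of chapter 13 (a 4-byte pointer parameter k and a 2-byte Boolean b), this assigns b the offset 12 and k the offset 14, matching the stack frame figure there, with the return value just above at offset 18.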

Function Calling Strategy


Our strategy in calling a function, passing parameters, returning values, etc. should be reviewed at this point, as it will help in understanding how the Cfunction data is organized. See appendix 2 for more details of calling and returning from a function, using the runtime stack, on the 80x86 platform; see also chapter 13.
- If this is a function (as opposed to a procedure), space for the return value is allocated on the stack.
- Each of the actual parameters is evaluated, from left to right, in the order of their appearance in the function call. Each parameter is matched with a corresponding formal parameter type, using the parms list.
- The function is called through the instruction call.
- Just inside the function, register EBP is pushed on the stack. This saves the previous EBP.
- Register EBP is set to the current value of ESP.
- The appropriate static link array member is set to the current value of EBP. The static link is discussed in chapter 13.
- Space is allocated from the stack for the local variables, by decrementing ESP.
- Some local variables (strings and sets) require some initialization.
- The function code can now be executed.

Function Exit Strategy


At the end of execution of the function code, we need to perform a few operations prior to returning from the function, as follows:
- If this is a function, its return value may have to be copied to an appropriate place.
- The ESP register is set to the current value of EBP.
- A pop EBP instruction is executed.
- A ret N instruction is executed.
- No special instructions are required after the return.

Label Type
The Clabel type is used to support Pascal goto labels.
#define CLABEL 29

class Clabel : public Ctype {
  // a GOTO label, which is associated with a name in Pascal
  // 'referenced' means the label has appeared in a GOTO at least once
  // 'defined' means the label has appeared as a statement marker at least once
  bool referenced, defined;
public:
  Clabel(void) : Ctype(LABEL), referenced(false), defined(false) {}
  // some member functions omitted
  virtual void printType(ostream& out) const { out << "Clabel"; }
  virtual int  getSize(void) const {return 0;}
  virtual void dump(ostream& out) const;
  virtual pasType getPtype(void) const {return tOTHER;}
  virtual int  getLower(void) const {return 0;}
  virtual int  getUpper(void) const {return 0;}
  virtual int  classCode(void) const {return CLABEL;}
};

This class carries two Boolean flags, referenced and defined, which indicate the state of a particular label. In Pascal, a goto label marker must first be declared, like this (label markers are integers in Pascal):
LABEL 15;

The program can then make use of "15" as a statement label, like this:
15: k := 255;

or in a goto statement, like this:


goto 15;

In general, there can be many goto statements, but only one statement label for each such label. Also, a label should only be declared once, and there should be at least one goto statement using it. The Boolean referenced means the label is referenced in at least one goto statement. The Boolean defined means the label has appeared as a statement label. Each label is entered in the symbol table by prefixing the number with a $ character, i.e. $15. It can therefore be found later and its state checked or changed. Here's how all that works:
- On a label N declaration, the label $N is pushed into the symbol table. If it's already there, we have an error.
- On a labelled statement, i.e. N: stmt;, we check the defined flag. If true, we have an error (it's previously been used as a statement label). In any case, we set it to true. If $N isn't in the symbol table, this is another kind of error.
- On a goto statement, i.e. goto N;, we set referenced. It doesn't matter whether this was previously set or not. If $N isn't in the symbol table, this is another kind of error.
- At the end of the scope of the label, the symbol table is scanned for label appearances. These are easily found by looking for those symbols linked (through the typep pointer) to a Clabel type object. For each such label, both Booleans should be true, which means that a statement label has appeared, and also that at least one goto has appeared.
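The following self-contained C++ sketch captures this bookkeeping. A std::map stands in for the book's symbol table, and the function names are hypothetical; only the flag logic follows the description above:

    #include <iostream>
    #include <map>
    #include <string>

    // Stand-in for the two Clabel flags; the real compiler attaches a
    // Clabel object to the symbol table entry "$N".
    struct LabelState { bool referenced = false; bool defined = false; };

    static std::map<std::string, LabelState> labels;   // keyed by "$N"

    static std::string key(int n) { return "$" + std::to_string(n); }

    void declareLabel(int n) {                  // LABEL N;
        if (!labels.emplace(key(n), LabelState{}).second)
            std::cerr << "error: label " << n << " declared twice\n";
    }

    void onStatementLabel(int n) {              // N: stmt
        auto it = labels.find(key(n));
        if (it == labels.end()) { std::cerr << "error: label " << n << " not declared\n"; return; }
        if (it->second.defined)
            std::cerr << "error: label " << n << " marks two statements\n";
        it->second.defined = true;
    }

    void onGoto(int n) {                        // goto N;
        auto it = labels.find(key(n));
        if (it == labels.end()) { std::cerr << "error: label " << n << " not declared\n"; return; }
        it->second.referenced = true;           // any number of gotos is fine
    }

    void endOfScope() {                         // scan at end of the labels' scope
        for (auto& p : labels) {
            if (!p.second.defined)    std::cerr << "error: " << p.first << " never marks a statement\n";
            if (!p.second.referenced) std::cerr << "error: " << p.first << " is never used in a goto\n";
        }
    }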

NIL Type
The Pascal keyword NIL stands for the "invalid pointer" value. Any pointer can be assigned NIL, in which case the pointer cannot be used to refer to any data. On most machines (including the 80386 used for compilation), the value 0 is equivalent to NIL. NIL can legally be used with a pointer of any type. Because of this universal property, we've set up a special Ctype object CnilType that stands for the NIL type. It carries no value and has a minimal set of member functions.

Variables
A Pascal variable is declared through a syntax that's very similar to that used for type declarations. A variable requires memory space at runtime, enough to support the size of its associated type. Like a type declaration, each variable must be located through a Csymbol symbol table lookup, and linked to some type. Figure 5 below shows three var declarations. The first one,
var vx: real;

declares one real variable. Each variable requires a Csymbol object to carry its name, which is also linked into the current symbol table. The only difference between a var and a type declaration is skind, which is set to sVARIABLE for vars. (Variables appearing as function formal parameters will have skind set to sVALPARM or sVARPARM instead). As usual, each variable also requires a pointer to a Ctype derived class. For this declaration, the name "real" points to box D in figure 2, so that's what this typep will also point to. The next var declaration is
var vaa: sra;
[Figure omitted. Fig. 5. Example Csymbol and Ctype objects for some VAR declarations. For var vx: real;, a Csymbol box AF (skind = sVARIABLE, name "vx") points through typep to the Csimple box D (ptype = tREAL). For vaa: sra;, a Csymbol box AG (skind = sVARIABLE, name "vaa") points to the CarrayType box Z (lower = 8, upper = 47). For vax: array [0..255] of char;, a Csymbol box AH (skind = sVARIABLE, name "vax") points to a new CarrayType box AI (lower = 0, upper = 255), whose indexType is the Csubrange box AJ (vlower = 0, vupper = 255) and whose elmtType is the Csimple box H (ptype = tCHAR).]
Here, the name sra must be in the symbol table. Its type is box Z, in figure 3, so that's what we use for the typep pointer in box AG. The third var declaration is
var vax: array [0..255] of char;

This shows that a more complicated type declaration can be used to declare a var. However, that's usually a bad idea, since Pascal's strong typing can sometimes cause problems when you try to use this variable with others. Types are compared for equality in Pascal by comparing the type pointers, rather than by comparing the structure of the type. As we'll see, a new type object must be created for this declaration, and it cannot be pointed to by any other var or type. This declaration requires the Csymbol object, box AH, to carry the var name "vax". Its type is a new object CarrayType, box AI. Its index type is a Csubrange, box AJ, constructed as described above especially for this variable. Its element type is char, which is box H in figure 2.

How a Compiler Uses Types


As we explained at the beginning of this chapter, the compiler makes use of type declarations in all of these ways:
- to determine if the operands of some operator have a valid type for the operator. For example, + is valid for two string types, but * is not,
- to determine the resulting type of some operation,
- to determine how to cast certain variable types for the purpose of an assignment statement or to pass a parameter by value to a function, and
- to work out the memory space required for a variable. This depends on the variable's type.
These uses in fact are the reason we need to carry so much information about types and variables during the compilation, and why the identifiers need to be carried in a symbol table, and associated with a type. To see how all this operates, let's make up a simple program based on the types given in figures 2-5 in this chapter. Recall that the structures in figure 2 are created by the compiler, and don't depend on any source declarations. Figures 3-5 do depend on source declarations:
program types;
type
  ent= (red, blue, green);
  sr= 8..47;
  src= sr;
  sra= array[sr] of real;
  rec1= record
    x: integer;
    y: real;
    z: sra;
  end;
var
  vx: real;
  vaa: sra;
  vax: array [0..255] of char;
  ex: ent;
begin
  vx := 56.7;       {assignment A}
  vaa[22] := 35E7;  {assignment B}
  vax[32] := 'z';   {assignment C}
  ex := green;      {assignment D}
end.

Suppose that the compiler has parsed all the type and var declarations. The result is a set of class objects in memory linked to names in the symbol table, as shown in figures 2-5. Let's discuss what the compiler needs to work out in assignment statement A:
- The right member of the assignment (56.7) is clearly type real, or box D. This is established by the form of the constant: 56.7 is clearly a real.
- The left member (vx) is declared real. This is established by looking up "vx" in the symbol table, finding box AF, whose type is box D.
- The assignment is legal since both the right and left side point to the same type box, D. No conversions are needed, and the assignment operation must be done by copying a 32-bit double word.

Now let's look at assignment B:
- The name vaa is looked up in the symbol table. The result is box AG, and its type is box Z, an array type.
- Any array type variable should be followed by a bracketed expression for its index. This one is.
- The index must resolve to a subrange type, between 8 and 47 inclusive. That's clearly the case, since 22 is type integer, and lies between these limits. Determining that the index is in bounds at compile time can only be done when the index is a constant. Otherwise, bounds checking requires some additional target code to check the bounds at runtime.
- The element type of an indexed vaa is a Csimple type REAL (see box D, figure 4). The right side of this assignment is clearly type REAL. (It could also be type INTEGER and still be compatible, since an integer can be upgraded to a real for the sake of an assignment). So the assignment operation is legal.

Let's look at assignment C:
- The name vax is looked up. The result is box AH, and its type is box AI, also an array type.
- The index type is box AJ, a subrange between 0 and 255. The index in the assignment statement is clearly compatible with this and it is also in bounds.
- The element type of vax[32] is type char, box H, and that's identical to the type that would be assigned to the character 'z' on the right side of the assignment. So this assignment is legal.

Finally, assignment D:
- The name ex is looked up in the symbol table. This doesn't appear in any of the figures, but points to the type box N in figure 2. This is an enumerated type.
- The name green, when looked up in the symbol table, yields box R in figure 2. This is a constant, whose type is box U.
- Since U is a CenumItem, the compiler is aware that its parent pointer indicates the type of this constant, box N. This is identical to the type of ex, so the assignment is legal.
- Also, green is associated with the constant value 2, so assembler code to copy a 2 into the variable space for ex can be generated.
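A minimal, self-contained C++ sketch of the compatibility test at work in assignments A-D is given below. The Ctype stand-in and the pasType codes are simplified from the book's classes; the rules shown are just the two used above, pointer equality (name equivalence) plus the integer-to-real upgrade:

    #include <iostream>

    enum pasType { tINTEGER, tREAL, tCHAR, tOTHER };

    struct Ctype { pasType ptype; };   // stand-in for the book's Ctype hierarchy

    bool assignmentCompatible(const Ctype* lhs, const Ctype* rhs) {
        if (lhs == rhs) return true;   // same type object: legal, no conversion
        if (lhs->ptype == tREAL && rhs->ptype == tINTEGER)
            return true;               // integer right side upgraded to real
        return false;
    }

    int main() {
        Ctype realT{tREAL}, intT{tINTEGER}, charT{tCHAR};
        std::cout << assignmentCompatible(&realT, &realT) << "\n"; // 1, as in assignment A
        std::cout << assignmentCompatible(&realT, &intT)  << "\n"; // 1, integer upgraded
        std::cout << assignmentCompatible(&charT, &intT)  << "\n"; // 0, illegal
        return 0;
    }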

References
[1] J. W. Backus, "The FORTRAN Automatic Coding System", Proc. Western Joint Computer Conference, vol. 11, pp. 188-198.
[2] Basic. See any Basic manual or textbook.
[3] P. Naur, "Revised Report on the Algorithmic Language ALGOL 60", CACM 6(1), pp. 1-17.
[4] H. Bauer, S. Becker, S. L. Graham, "ALGOL W Implementation", CS98, Stanford University Computer Science Department.
[5] E. T. Irons, "A Syntax Directed Compiler for ALGOL 60", CACM 4(1), pp. 51-55.


[6] Perl.
[7] N. K. Wirth, "The Programming Language PASCAL", Acta Informatica 1(1), pp. 35-63, 1971.
[8] Brian W. Kernighan, Dennis M. Ritchie, The C Programming Language, Prentice-Hall, 1978.
[9] N. K. Wirth, Pascal User Manual and Report, Springer-Verlag, 1978.
[10] A. V. Aho, J. D. Ullman, The Theory of Parsing, Translation and Compiling, two volumes, Prentice-Hall, 1972.


Chapter 13: Functions and Procedures in Pascal


W. A. Barrett, San Jose State University nch13.doc, vs. 2.1

Introduction
The Intel Pentium provides a call instruction that, when executed, pushes a return address onto the runtime stack, then transfers control to its code location argument. The ret instruction is designed to "return" from a call. At runtime, it expects to find the return address at offset ESP in the runtime stack. It pops the address, then transfers control to that address. The runtime stack has many other uses at runtime, as follows:
- to carry temporary results of arithmetic calculations,
- to carry the formal parameters in a function or procedure call,
- to carry any local variables of a function or procedure during a call invocation,
- to carry miscellaneous addresses needed to support language features, i.e. the return address, dynamic link and static link,
- to carry the temporary return value of a function. This stack space is needed in Pascal, as we'll see.
Most of this data will be accessed indirectly through a special register, EBP, which uses the stack segment as its base. This chapter will further develop a commonly used plan for supporting high-level function and procedure calls, using Pascal as the source language. The methods developed here carry over into other languages, including object-oriented languages. They are also the basis of function calling methods used in other processors. As usual, we'll use the Pentium protected mode instruction set in what follows, with 32-bit addresses. For example, a function call will push a 4-byte address onto the stack. Integers and reals will be double words. Specific examples of procedure call assembly source code may be generated with the compiler in directory pascal5, which follows these conventions.

Functions and Procedures


A procedure in Pascal is called for its side effects. It does not return a value. A procedure can only be called as a separate statement in Pascal, like this:
proc(5, true);

This procedure takes two parameters then performs some operation. It may call other functions, and it may access or change other variable values. It might write or read a file or a database. Ultimately, it is expected to return, but there is no return value. After its operation, control passes to the following statement. This procedure might be declared like this:
procedure proc(p1: integer; b1: boolean);
begin
  { execution code }
end;

A function in Pascal may produce side effects, but it is expected to return some value. Functions can be called from within expressions, like this:


a:= 15*fcn(5, true);

Here, the function fcn takes two parameters, an integer and a Boolean. It may perform a simple or complicated operation. It may call other functions, and it may access or change other variable values as a side effect. Ultimately, it is expected to complete its operations and return some value, which will then be multiplied by 15. The product will then be assigned to variable a. This function might be declared like this:
function fcn(p1: integer; b1: boolean) : integer;
begin
  { execution code }
  fcn := p1*5;   { setting the return value }
end;

The return value of a function is set through an assignment statement as shown in the above example, by using the function name on the left side of the assignment, like this:
fcn := p1*5; { setting the return value }

This sets the return value, which will be returned to the calling environment through a mechanism that we'll discuss later. The return value may be returned through one of the registers, or through the floating point unit, or by leaving it on the runtime stack. Just how that is done is a compiler feature and not a language feature. The compiler designer can choose any of a number of possible implementations and will usually pick the one with the highest possible runtime performance. Notice that the special keywords procedure and function distinguish these two Pascal features. Yet they have many features in common. Both can accept parameters, which are declared the same way. Both have a block of body code (the material between begin and end). Both can affect other variables as a side effect. The only difference is that a function returns a typed value and must be called within an expression, while a procedure is called as a statement.

Recursive Functions
Functions and procedures are expected to operate correctly if called recursively, i.e. calling themselves. Recursive calls require that the return address be pushed onto the runtime stack, as well as all local variables and formal parameters, since two different calls will in general have different values for these. We don't want a later call to wipe out information set up in an earlier invocation that isn't done yet. We'll use a uniform mechanism that supports recursion, even though many functions and procedures are not recursive. It's sometimes possible for a compiler to figure out which functions are recursive and which aren't. A non-recursive function can be set up and called more cheaply at runtime than a recursive one, so this is a useful attribute for a compiler to look for. Since a Pascal program is completely contained in one file, the compiler can work out a function/procedure call graph and discover which functions are called recursively. In C/C++, a function must be presumed to be recursive unless it has the static attribute and a graph analysis can show that it can't call itself. Non-static functions may have calls in other files, so they must be assumed recursive for want of better information.
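As an illustration of that call-graph analysis, here is a small self-contained C++ sketch that marks a function recursive if it can reach itself through the call graph. The adjacency-list representation is an assumption for illustration; a real compiler would build the graph while parsing the whole program:

    #include <vector>

    // callees[f] lists the functions that f calls directly.
    static bool reaches(int from, int target,
                        const std::vector<std::vector<int>>& callees,
                        std::vector<bool>& visited) {
        for (int g : callees[from]) {
            if (g == target) return true;      // a path back to target: recursion
            if (!visited[g]) {
                visited[g] = true;
                if (reaches(g, target, callees, visited)) return true;
            }
        }
        return false;
    }

    // A function is recursive if it can reach itself in the call graph,
    // directly or through intermediaries.
    bool isRecursive(int f, const std::vector<std::vector<int>>& callees) {
        std::vector<bool> visited(callees.size(), false);
        return reaches(f, f, callees, visited);
    }

On the main1 program shown later in this chapter, this test would report proc3 as directly recursive, and proc2, func2 and func3 as indirectly recursive through the cycle proc2 > proc3 > func2 > func3 > proc2.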

C Functions
It's useful to compare Pascal to C in this regard. In C, there are only functions. The C language permits a function to be called as a statement or within an expression, if it returns some type other than void. By default all C functions return an int. If you need a function that doesn't return a value, then it must be declared as a void type, like this:


void fcn(int k, BOOL b) {

...

Whether a C function returns a value or not, it can be called by itself in a statement form. If it returns a value and is called as a statement, the return value is just discarded. A void function must be called as a statement; it can't be used in an expression. The return value in C is returned through the return keyword, whose argument is often written in parentheses as if return were a function, i.e.
return(value);

This means that a C function doesn't need a temporary memory space to carry the return value, and that it tends to be somewhat more efficient than an equivalent non-optimized Pascal function for that reason.

Actual and Formal Parameters


Suppose we have a function declaration like this:
function fcn(p1: integer; b1: boolean) : integer;
begin
  { execution code }
  fcn := p1*5;   { setting the return value }
end;

Then somewhere else we might have a function call like this:


k := fcn(i+15, b AND c);

We say that the variables p1 and b1 in the function declaration are the formal parameters of the function. In the function call, the expressions i+15 and b AND c are the actual parameters. It should be clear that the value of i+15 will be associated with variable p1 inside the function body. Also, the value of b AND c will be associated with variable b1 inside. Here's what will happen in the call:
- each actual parameter is evaluated. These can often be arbitrary arithmetic expressions, yielding some value to be passed to the function.
- the value of each parameter will be pushed onto the runtime stack. After the function call is made, these values can be accessed through stack-relative addresses. These values essentially take on new names, the formal parameter names.
In our example, the expression i+15 is evaluated. Let's say its value is 225. Then the integer 225 is pushed on the runtime stack. Expression b AND c is next evaluated, and its Boolean result is pushed on the stack. Let's say its value is false. The function is then called. These values, 225 and false, will be associated with the formal parameters p1 and b1, respectively, and can be accessed at runtime by referencing certain locations in the runtime stack. Note that the fact that these came from some expression isn't known inside the function. Other function calls may involve other expression forms. The only information about these that survives the function call are the values resulting from evaluation of the expressions. This is an example of passing parameters by value, which is the Pascal default: the actual parameter is evaluated as an expression, yielding some value that can be copied to the runtime stack.
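The matching of actual parameters against the formal-parameter list can be sketched as follows. This self-contained C++ fragment uses illustrative names, and it checks only the integer-to-real conversion discussed in this book, not the full Pascal rules:

    #include <vector>

    enum pasType { tINTEGER, tREAL, tBOOLEAN };

    struct Formal { pasType type; };   // one entry of the parms list
    struct Actual { pasType type; };   // type of an evaluated actual parameter

    // true when the call is legal; an integer actual may be converted
    // to a real formal, just as in an assignment
    bool checkCall(const std::vector<Formal>& formals,
                   const std::vector<Actual>& actuals) {
        if (formals.size() != actuals.size()) return false;   // wrong count
        for (size_t i = 0; i < formals.size(); ++i) {
            if (formals[i].type == actuals[i].type) continue; // exact match
            if (formals[i].type == tREAL && actuals[i].type == tINTEGER)
                continue;                                     // int -> real upgrade
            return false;                                     // no conversion applies
        }
        return true;
    }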

Passing by Value and by Reference


Pascal also supports passing by reference, in which the address of the actual parameter is passed instead of its current value. Clearly, the actual parameter has to be something that can be addressed, and not some arbitrary expression. The address must be worked out just before the call, and pushed onto the stack, through instructions like these two:


lea  eax,parm   ; get the offset of parm
push eax        ; push it

The function can then both access and set that parameter through its address. Here is an example of a simple program containing a function with a parameter k passed by reference:
program mine;
var m, n: integer;   { global variables }

function rref(var k: integer; b: boolean): integer;
var p: array [0..2] of real;
begin
  p[1]:= 23;
  k:= p[1];
  rref:= 5;
end;

begin { main program }
  m:= 100;
  writeln(m);      { prints 100 }
  n:= rref(m, false)-5;
  writeln(m, n);   { prints 23, 0 }
end.

The variable m is passed by reference to function rref in the call


n:= rref(m, false)-5;

Inside the function, variable m is known through the formal parameter k. We pushed the address of m on the stack before the call, and inside the function, we can refer to that stack location as the formal parameter k, which points to the variable m. The integer that k points to is set to 23 inside the function body. So after the return, we find that variable m now has the value 23. The value 5 is returned from the function, and n receives 5-5 = 0.

Stack Frame
A stack frame is a contiguous section of the runtime stack that is allocated for each function invocation. Each time a function is called at run-time, one stack frame is created. It's partially created just before the call, and completed by the first instructions of the function code. A stack frame contains most or all of these items:
- a return value for a function (if this is a function, not a procedure),
- the function's actual parameters, if any (they will be addressed as the formal parameters inside the function),
- the return address,
- a saved copy of register EBP,
- a saved copy of the current static link,
- all local variables declared inside the procedure or function, if any.
A typical stack frame is given in Figure 1. Increasing addresses are up, and the stack top is at the bottom of the figure. This stack frame is for function rref, which looks like this:
function rref(var k: integer; b: boolean): integer;
var p: array [0..2] of real;
begin
  ...
end;


It applies just before the function's body code is executed. rref has two formal parameters, an integer by reference (k) and a Boolean variable by value (b). It returns an integer (4 bytes), for which space in the stack frame must be allocated (the return value). The parameters require space for the pointer, k (4 bytes), and the Boolean value b (2 bytes). These are pushed onto the stack before the call instruction is executed. The call pushes a return address on the stack (4 bytes). Just inside the function, the first instructions push the previous static link, a pointer (4 bytes), then the previous EBP register value. We will discuss the purpose of the static link later in this chapter. EBP must be saved, since we will reset it in this function, and use it to refer to the stack variables. The function has one local variable p, an array of three 4-byte floating-point numbers, or 12 bytes total. So space is allocated for it. These entries are in the order in which the items are pushed onto the stack. Recall that a Pentium push instruction decreases the stack pointer, so the addresses decrease as items are pushed onto the stack. Also the current stack pointer points to the least address of whatever was last pushed.

    Address     Item                   Size
    [ebp+18]    return value           4 bytes
    [ebp+14]    pointer to k           4 bytes
    [ebp+12]    Boolean b              2 bytes
    [ebp+8]     return address         4 bytes
    [ebp+4]     previous static link   4 bytes
    [ebp]       previous EBP           4 bytes
    [ebp-12]    variable p             12 bytes

Figure 1. Stack frame for a function call

Just after the previous EBP is pushed onto the stack, register ESP is copied to EBP. Recall that both ESP and EBP can be used as memory offsets, and that their default segment is the stack segment. We will use EBP to refer to positions in the current stack frame. By copying ESP to EBP just after pushing the previous EBP, register EBP will point to that cell in the stack frame. Address [EBP] therefore points to the previous EBP. Address [EBP+4] points to the previous static link, [EBP+8] to the return address, [EBP+12] to Boolean b, [EBP+14] to the pointer to k, and [EBP+18] to the return value. Since the local variables are allocated after EBP is fixed, they will have negative offsets. Thus [EBP-12] points to the base of the array p. Just after the stack frame is set up, the stack pointer ESP will point to the bottom-most entry in the stack, the "variable p" word. The stack can be used for other operations, such as more function calls, expression evaluation, etc., by doing more pushes (and matching pops). Of course, these values must not be corrupted by, for example, popping the stack more than it's been pushed. For example, this innocent assignment
p[3] := 1;

will set the previous EBP to 1, overwriting the value saved there. (Array p occupies [EBP-12] through [EBP-1], so the out-of-bounds element p[3] lands exactly at [EBP], where the previous EBP is saved.) This will guarantee a crash after the function returns. Guaranteeing stack integrity is a major responsibility of any compiler.


It's possible for the compiler to keep track of what ESP is pointing to. A reasonable optimization is therefore not to save EBP or use it for this purpose. Instead, the compiler can access variables by offsets from ESP. For example, if nothing were pushed on the stack, the address of the Boolean b would be [ESP+24]. If another double word were pushed onto the stack, that address would change to [ESP+28]. This strategy saves a push and a pop in each function call. However, most symbolic debuggers depend on EBP to locate local variables and to provide a function call backtrace. So applying this optimization means that the program cannot be debugged at the source level. You can always debug a program at the assembler level, though that is obviously much more tedious.

Calling a Function
Suppose that function rref is called like this:
n:= rref(m, false)-5;

The assembler for the call will then look like this:
sub  esp,4   ; allocates return value space
lea  eax,m   ; form a pointer to variable m
push eax     ; push the pointer
mov  ax,0    ; get the Boolean false associated with b
push ax      ; push the Boolean
call rref    ; call the function

Notice that we start by decrementing ESP by 4. This allocates space in the stack for the return value. Each parameter in the call is then evaluated and pushed. In this case, the address of m is formed by the lea instruction and pushed in the stack. Next, the Boolean false (a 0) is pushed. Finally, call is executed, which pushes a return address in the stack. Just after the call, the stack frame will look like the following. Register EBP contains something else, so we won't show it. The addresses relative to EBP also won't get fixed until EBP is set.

    Address   Item             Size
              return value     4 bytes
              pointer to k     4 bytes
              Boolean b        2 bytes
              return address   4 bytes

Inside the Function Setup Code


The first instructions executed when in the function rref are these:
push STLINK+n      ; save static link
push ebp           ; save the previous EBP register
mov  ebp,esp       ; set EBP to the current stack frame
mov  STLINK+n,ebp  ; set static link
sub  esp,12        ; allocate local variable space, for variable p

We'll discuss the static link later in this chapter. As to the push EBP, this is needed to save the previous value of EBP. We will set it to something new in the next instruction. The mov EBP,ESP copies register ESP to EBP. Note that at this moment, ESP points to the previous EBP register. By copying this to EBP, we can start using EBP as a pointer into the stack, rather than ESP. (ESP will move up and down as we make more function calls. Although we could access stack frame elements with ESP offsets, most debugging tools depend on EBP to locate function call frames. So ESP would be suitable for production code, while EBP is needed for symbolic debugging.) The static link STLINK+n is set in the next instruction; n will be the nesting level of this function. The sub esp,12 allocates space for all the local variables. The separate variables will have offsets from EBP that the compiler can work out. If any of these require initialization, that can be done later through explicit instructions using EBP offsets. At this moment, the stack frame is complete and is shown in figure 1 above.

Function Execution
After these initial setup instructions, the function body is executed. We've seen how assignment statements can be represented in assembler. The runtime stack is sometimes used to hold a temporary value while evaluating expressions, but the compiler must be careful to always pop off exactly the same number of bytes as were pushed. Thus, after each statement, the stack should again look like figure 1, with ESP pointing to the base of array p. All the local variables, formal parameters and the return value can be accessed through EBP offsets, like this:
mov ax,[EBP+12]

which fetches the Boolean b into register AX. EBP will remain fixed throughout the function body execution, except when another function F is called within the body. F will allocate another stack frame and set EBP to a new value. But since F also saves the existing EBP, and restores it just before returning (read on), EBP will effectively remain fixed throughout the current function body. All this obviously depends on maintaining this discipline through ALL the program's code.

Function Return Code


When we are ready to return from this function, the following instructions are executed:
mov esp,ebp    ; clear away local variables
pop ebp        ; restore EBP
pop STLINK+n   ; restore static link
ret 6          ; return, removing the formal parameters

The first instruction,


mov esp,ebp ; clear away local variables

deserves a bit of explanation. By copying EBP to ESP, we effectively set the stack pointer to the location of the previous EBP (see figure 1). This has the effect of eliminating the local variable allocation. It does nothing to the values carried by the local variables, but it has moved them outside the protected domain of the stack. This action also makes it easy to restore EBP and the previous static link through two pop instructions, as shown. After the two pop instructions, ESP points to the return address, which is necessary for the ret instruction. A ret 6 will not only return from the function, but it will also increase ESP by 6, the bytes used for the formal parameters. ESP will then point to the return value. The stack frame at this moment will look like this:

    Address   Item           Size
              return value   4 bytes

We've left the return value on the stack top, under the assumption that this is how our function returns an integer value. That's one possibility. In fact, our Pascal compiler leaves an integer return value in register EAX. So it should generate the following code instead:
mov eax,[ebp+18]  ; copy return value to EAX
mov esp,ebp       ; clear away local variables
pop ebp           ; restore EBP
pop STLINK+n      ; restore static link
ret 10            ; return, removing formals and return value

This return sequence will also remove the return value from the stack; it's now in register EAX. In general, where the return value ends up depends on how the compiler treats the function call within an expression. The Pascal compiler strategy is to leave expression values in EAX if they are integer, char or Boolean, to leave a pointer in EAX for pointer return values, and to leave a floating value in the FPU stack for a float return. Only one kind of object is returned on the stack: a record object returned by value.
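A sketch of how a code generator might act on these conventions follows. The pasType names echo those used with the Ctype classes in chapter 12, but this enum, the offsets and the emit strings are illustrative only:

    #include <cstdio>

    enum pasType { tINTEGER, tCHAR, tBOOLEAN, tPOINTER, tREAL, tRECORD };

    // Move a return value from its stack slot at [ebp+off] into the
    // agreed return location, per the conventions described above.
    void emitReturnValue(pasType t, int off) {
        switch (t) {
        case tINTEGER: case tCHAR: case tBOOLEAN: case tPOINTER:
            std::printf("    mov eax,[ebp+%d]\n", off);        // scalars: EAX
            break;
        case tREAL:
            std::printf("    fld dword ptr [ebp+%d]\n", off);  // floats: FPU stack
            break;
        case tRECORD:
            break;   // a record by value stays on the stack; nothing to move
        }
    }

Calling emitReturnValue(tINTEGER, 18) reproduces the mov eax,[ebp+18] of the rref epilog above.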

After Returning from the Function


By following this story carefully, you should see that the ret instruction has transferred control back to the instruction just following the call. Recall that we are examining the assembler for the Pascal statement
n:= rref(m, false)-5;

The assembler for this whole statement now looks like this:
sub  esp,4   ; allocate return value space
lea  eax,m   ; form a pointer to variable m
push eax     ; push the pointer
mov  ax,0    ; get the Boolean false associated with b
push ax      ; push the Boolean
call rref    ; call the function
sub  eax,5   ; subtract 5 from the return value in EAX
mov  n,eax   ; complete the assignment

The Function Body


The function rref must appear somewhere else in the Masm assembler file. It should have a proc header and an endp trailer. So the whole function body will look like this, assuming that we return the function value in register EAX:
rref proc near
     push STLINK+n      ; save static link
     push ebp           ; save the previous EBP register
     mov  ebp,esp       ; set EBP to the current stack frame
     mov  STLINK+n,ebp  ; set static link
     sub  esp,12        ; allocate local variable space, for variable p
;
; code for the statements in the function goes here
; one of the statements should set the return value [ebp+18], e.g.
;     rref:= 22;
; which is implemented by this instruction:
     mov  dword ptr [ebp+18],22
;
; eventually, the function is supposed to return...
     mov  eax,[ebp+18]  ; copy return value to EAX
     mov  esp,ebp       ; clear away local variables
     pop  ebp           ; restore EBP
     pop  STLINK+n      ; restore static link
     ret  10            ; return, removing formals and return value


rref endp

Nonrecursive Function Calls


Nonrecursive function calls don't require a stack frame, since at most there can be only a single invocation of the function. Its formal parameters, return value and local variables can be allocated once and for all, in global data space. The use of global data space saves a few instruction cycles, since a global data item can be accessed directly through an address embedded in the instruction. Note that stack variables must be accessed indirectly through EBP or ESP, with some offset. The performance improvement may not be noticeable, however, since a register-indirect access to memory is about as fast as a direct access. Also, a push onto the stack and a mov to data space cost about the same number of cycles. About the only advantage of this strategy might be in a real-time, memory-limited application. The software for such applications is usually written to avoid the use of recursion, since the extent of the recursion is typically unknown, and therefore the maximum stack space required is unknown.

Nested Functions
We need to examine another issue that is unique to Pascal, Ada and Modula. That is the issue of nested functions and uplevel references. This issue doesn't arise in C/C++, since these languages don't support nested function declarations. Here is a Pascal program that contains two nested functions (func2, func3) and two nested procedures (proc2, proc3):
PROGRAM main1;
VAR i: integer;
    r: real;

PROCEDURE proc2(p2: integer); forward;

FUNCTION func2(f2: integer) : integer;
  VAR r: real;

  FUNCTION func3(f3: integer) : real;
  BEGIN {func3}
    proc2(6);                 {indirect recursion}
    if (f3 = 15) then i:=1;   { reference to f3 }
    if (f2 = 3) then i:=2;    { reference to f2 in func2 }
    if (i = 4) then i:=3;     { reference to global i }
  END {func3} ;

BEGIN {func2}
  r:= func3(5);
END {func2};

PROCEDURE proc2(p2: integer);

  PROCEDURE proc3(p3: integer);
  BEGIN {proc3}
    proc3(4);                 {direct recursion}
    r:= func2(3);
  END {proc3};

BEGIN {proc2}
  proc3(2);
  r:= func2(1);
END {proc2} ;

BEGIN {main1}
  proc2(0);
END {main1} .

(This program is nonsense and won't execute properly. Certain of the recursive calls never terminate.) We say that the function func3 is nested inside function func2. There's a function header for func2, like this:
FUNCTION func2(f2: integer) : integer;

Its executable code starts several lines later, and looks like this:
BEGIN {func2}
  r:= func3(5);
END {func2};

Between these two, we could introduce some const, type and var declarations, such as the line
VAR r: real;

Pascal also permits inserting some procedure or function declarations, and these are said to be nested inside the covering function or procedure. Here's function func2 without the nested variable and function declarations:
FUNCTION func2(f2: integer) : integer;
BEGIN {func2}
  r:= func3(5);
END {func2};

And here it is with the nested variable r and the function func3:
FUNCTION func2(f2: integer) : integer;
  VAR r: real;

  FUNCTION func3(f3: integer) : real;
  BEGIN {func3}
    proc2(6);                 {indirect recursion}
    if (f3 = 15) then i:=1;   { reference to f3 }
    if (f2 = 3) then i:=2;    { reference to f2 in func2 }
    if (i = 4) then i:=3;     { reference to global i }
  END {func3} ;

BEGIN {func2}
  r:= func3(5);
END {func2};

What this Means


When one function or procedure, like func3 above, is nested inside another one, like func2, it can only be called from within the scope of its enclosing procedure, func2. In the example above, the call func3(5) is OK, because it lies within the scope of func2. However, a call of func3 outside that scope would not be recognized by the compiler. For example, consider the body of the program


BEGIN {main1}
  proc2(0);
END {main1} .

which appears near the end of the file. A call of func3 inside these BEGIN - END pairs would be illegal. Similarly, a call of func3 somewhere above the func2 header cannot be recognized. What this nesting scheme provides is a way of hiding certain functions and procedures from other material in a program. It's somewhat like declaring a member function of a class; only within the context of an object of that class can the member function be used. You can therefore write or use several different functions, all with the same name, provided that they appear inside separate functions. This feature was introduced in Algol 60 as a way of supporting code reuse, and was carried into Pascal by its author Niklaus Wirth. Notice that the C language doesn't support nested functions, except in a very limited way. You can use the same name for different functions, but they have to appear in different files, and also carry the attribute static. The static attribute causes their names to be local to the file; they won't survive to the linker. Static functions and variables must also be referenced exclusively within their file, and can't be referenced in some other file. Of course, the C++ class mechanism is superior to this nested function concept.


[Figure omitted. Figure 1. How the runtime stack maintains stack frames, dynamic links (left) and static links (right) with nested procedures and functions. This follows program main1 above. Six snapshots of the stack are shown, one per call chain: (a) main1 > proc2; (b) main1 > proc2 > proc3; (c) main1 > proc2 > proc3 > proc3; (d) main1 > proc2 > proc3 > func2; (e) main1 > proc2 > proc3 > func2 > func3; (f) main1 > proc2 > proc3 > func2 > func3 > proc2.]


Upreferenced Variables
Let's take another look at the functions func2 and func3, reprinted below:
FUNCTION func2(f2: integer) : integer;
  VAR r: real;

  FUNCTION func3(f3: integer) : real;
  BEGIN {func3}
    proc2(6);                 {indirect recursion}
    if (f3 = 15) then i:=1;   { reference to f3 }
    if (f2 = 3) then i:=2;    { reference to f2 in func2 }
    if (i = 4) then i:=3;     { reference to global i }
  END {func3} ;

BEGIN {func2}
  r:= func3(5);
END {func2};

We can see how function func3 can be called from within the body of func2. We also see how the formal parameter f3 associated with function func3 can be referenced. But look at the next line,
if (f2 = 3) then i:=2; { reference to f2 in func2 }

Here, the formal parameter f2 associated with function func2 is being referenced. This is acceptable by the Pascal scoping rules. Any nested function should be able to access variables and functions found at an enclosing scope level, including any global variables. But this introduces certain problems when we consider how to compile these statements into assembler.

Recall that the variables declared inside some function don't exist until the function is called. They are allocated from the runtime stack as part of the function's stack frame, and they are assigned addresses that are relative to the stack frame. Each such local variable has an EBP-relative address of the form [EBP-22], where the -22 represents the byte offset from the origin of the stack frame, and register EBP carries the stack frame origin. EBP depends on just how and when the function was called at runtime. Similarly, each formal parameter variable has an address of the form [EBP+8]. In other words, the compiler cannot assign fixed addresses to such variables; it must assign EBP-offset addresses, and it can only work out the offset part, not the absolute address.

Now functions can be called in many different ways. There's no guarantee, for example, that just because func3 is nested inside of func2, the variables associated with func2 will be in the stack frame immediately preceding that of func3. In general, the stack frame for func2 could be at a large distance, memory-wise, from the stack frame of func3.

Look at Figure 1, which shows six different snapshots of the runtime stack for the above program. In (a), the main1 body has been called, and then inside it, proc2 has been called. This causes two stack frames to be placed in the runtime stack. The current stack frame (at the bottom of the diagram, which is the "top of stack") is marked by register EBP, and is of course the stack frame for the procedure currently executing, proc2.

The arrows on the left side of each snapshot show the dynamic links. These always point to the immediately preceding stack frame. A dynamic link is created on the stack by pushing register EBP very shortly after entering the body of a function. At that moment, EBP contains a pointer to the previous stack frame, that belonging to the function that made the call. EBP is then set to mark the current function's stack frame. As more functions are called, a chain of dynamic links is set up in the stack as shown by the figures. The deepest call sequence is in figure (f), where proc2 has a dynamic link pointing back to func3, which points back to func2, etc. The assembler code executed first upon entering a function or procedure forms the dynamic link with these instructions:
push ebp


mov  ebp,esp

The push saves the existing EBP on the runtime stack. The mov then sets EBP to the current stacktop offset, ESP. We need the dynamic link in order to restore register EBP just before returning from the function. You'll find the assembler instruction
pop ebp

before the procedure exits.

The Static Link


Figure 1(e) illustrates the difficulty we face in trying to locate the global variable i from within function func3. Two such references are in the line
if (i = 4) then i:=3; { reference to global i }

found in the body of func3. Variable i is not a local variable of func3, nor is it a local variable of func2. As a global variable, it is part of the stack frame associated with main1. Figure 1(e) shows the five stack frames formed when main1 calls proc2, which calls proc3, which calls func2, which calls func3. We see that the func3 stack frame is separated from the main1 stack frame by several other intermediate stack frames. The general problem is: where is the main1 stack frame relative to the func3 stack frame? For that matter, notice that the stack frame for func2 is just beneath the stack frame for proc3, yet func2 isn't nested in proc3; it's nested in main1. The difficulty we face arises because the sequence of dynamic calls of functions doesn't necessarily follow the static nesting of the functions. Yet we need to provide a way of referencing the variables found in a (statically) enclosing environment, i.e. we want to be able to access the variables of main1 from within func3. We therefore need a static link of the sort shown on the right side of the stack frames in Figure 1.

Look at Figure 1(e) again. The static link for func2 points to main1 as it should, since func2 is nested just inside main1. The static link for proc3 points to proc2 since proc3 is nested in proc2. Similarly, the static link for proc2 points to main1. And we clearly want the static link for func3 to point to the last func2 stack frame set up on the stack.

Figure 1(f) shows six stack frames resulting from several function calls. Notice that a stack frame for proc2 appears twice in the diagram: main1 called proc2, which called proc3, which called func2, which called func3, which called proc2 a second time. Both of the proc2 static links point to main1 as they should. Each dynamic invocation deserves its own stack frame. The static link for function "me" needs to point to "my" statically enclosing function's frame, and that's not necessarily the frame of the function that called "me".

Implementing Static Links


An efficient way of managing and using static links is through a static link array. In the Qparser Pascal compiler system, this is an array of 32-bit double words carried at run time. Each double word will carry an offset into the stack segment, much like the EBP and ESP registers do. The array index (0, 1, 2, ...) is essentially the static nesting level of an associated function or procedure. This array is called STLINK, and is in file aservice.asm, which is included with every assembler file generated by the compiler. Here's its declaration:
STLINK dd 20 DUP(0) ; carries static links as indexed array

One double word is allocated for each possible nesting level. The nesting level of the main program is 0, the level of each function or procedure nested in main is 1, the level of each function inside a level-1 function is 2, etc. In the example program at the beginning of this chapter, proc2 and func2 are at level 1, proc3 and func3 are at level 2, and so forth. We've allocated 20 double words to STLINK, which will support a maximum nesting depth of 20. The static link array STLINK is used this way at runtime: Just after entering any function, we save the current STLINK[N] value, where N is the nesting level, like this:
push STLINK+N*4

Just after EBP is fixed upon entering any function, the value of EBP (a 32-bit word) is stuffed into the address STLINK+N*4, where N is the nesting level of the function. This can be done with a single instruction, like this:
mov STLINK+12,EBP

Here, the level is 3, so we want the new value of EBP to be stuffed into STLINK+12. (The 12 arises because we need a byte address, and a double word is 4 bytes). Any reference to a variable will in general have the form [EBP+n] or [EBP-n]. However, EBP must contain a valid pointer to the appropriate stack frame. If the variable is in the current stack frame, we can just use this address form as is. However, if the variable is in a different stack frame, because it's at a different nesting level, then we need to adjust EBP before using the reference, then restore it later. The compiler will always know the nesting level of its variables, so it's easy to generate EBP adjustment code. For example, suppose we are at nesting level 5 and we need a reference to an integer variable V whose EBP offset is -6, which resides at nesting level 3. The compiler will then issue these instructions:
mov EBP,STLINK+12   ; adjust EBP to uplevel stack frame
mov EAX,[EBP-6]     ; access the variable
mov EBP,STLINK+20   ; restore EBP to the current stack frame

The first mov sets EBP to point to the stack frame at nesting level 3 by using the STLINK array. (The current level is 5.) The value in STLINK+12 was set upon entering that function. We know that it had to be entered, since our current level is 5, and you can't get to level 5 without first calling functions that go through levels 1, 2, 3 and 4. The second mov performs the variable reference. This may or may not refer to an integer variable; it depends on the variable's type, clearly. The third mov resets EBP to the current stack frame, at nesting level 5. Nothing else need be done to support static up-level references. Before exiting from a function, EBP is of course reset to its previous state through the dynamic link saved on the stack; this is the purpose of the instruction
pop ebp

found near the exit. Also, STLINK[N] must be reset from its saved value in the stack. Since it was pushed first, it's popped last, like this:
pop STLINK+N*4

Of course, N is the function's nesting level.
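To make the compiler's side of this concrete, here is a small self-contained C++ sketch of the code emission just described. The Var struct and emitFetch are illustrative names, not Qparser's actual interface, and the real compiler special-cases level 0 (globals), as the next section explains:

    #include <cstdio>

    // Hypothetical stand-in for a variable's runtime attributes.
    struct Var { int level; int ebpOffset; };   // static nesting level, EBP offset

    // Emit "mov eax,<var>" from code at nesting level curLevel. When the
    // variable lives in an enclosing (up-level) frame, borrow that frame's
    // EBP from the STLINK array, then restore our own.
    void emitFetch(const Var& v, int curLevel) {
        if (v.level != curLevel)
            std::printf("    mov ebp,STLINK+%d\n", 4 * v.level);
        std::printf("    mov eax,[ebp%+d]\n", v.ebpOffset);
        if (v.level != curLevel)
            std::printf("    mov ebp,STLINK+%d\n", 4 * curLevel);
    }

    int main() {
        Var v{3, -6};      // the example from the text: level 3, offset -6
        emitFetch(v, 5);   // emits the three-instruction sequence shown above
        return 0;
    }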

Global Variables
Global variables can (in many architectures, including the 80x86) be accessed directly rather than through an EBP offset. They reside at level 0, and the compiler can notice that. Global variables are allocated at the low-address end of the data segment, rather than in the stack. They can then be accessed through the data segment register DS instead of through a stack offset from EBP. This is clearly an optimization, since fewer instruction cycles are needed for such an access. This has been done in the Qparser Pascal compiler. To see how this is arranged in assembler, look at the assembly code generated for the global variable i in t18.pas, reprinted below:


; 2:
; 3:     VAR i: integer;

The .DATA directive causes the following material to be placed in the data segment, rather than the code segment
.DATA

So variable i (which is renamed I_051 in the assembler) is assigned one double word (32 bits)
I_051    dd 0

And the variable r (renamed R_052) is assigned a double word (32 bits)
; 4:     r: real;
R_052    dd 0.0

The compiler knows that globals are at level 0 and suppresses any EBP adjustment code for these variables. Also, any reference to a global variable just uses the variable name, i.e.
mov eax,I_051

Since I_051 is a global, this instruction carries a direct reference to the memory offset from the DS segment register. If I_051 were in some stack frame, its name would be equated to something like [EBP+14], and the mov instruction would receive that form instead, i.e.
mov eax,[ebp+14]

Summary
The purpose of the entry and exit code generated for a typical function should now be clear. You can generate all the assembler code yourself with the pascal compiler in directory pascal5. This contains the Pascal program listed at the beginning of this chapter. In the following, lines starting with a semicolon are source Pascal code.

Calling a Function
Here's a typical function call, and the assembly code that supports it. This is found in the assembler code generated from file pasprogs/t18.pas. We've added comments to make the instructions clear. func3 returns a real type, and has an integer parameter by value (f2). Temporary space for the return value is needed, but that value is actually returned in the floating-point unit (FPU).
; 21:     r:= func3(f2);

Allocate a double word for the real return value:

    sub  esp,4

Push the double-word parameter f2, which happens to have the address [ebp+8]:

    push dword ptr [ebp+8]

Now call the function:

    call FUNC3_049

The function leaves its result in the FPU stack, so here's how the assignment is done. Variable r is at [ebp-8].
fstp dword ptr [ebp-8]

Function Opening
Here's how the function func3 opens, in Pascal:
; 9:      FUNCTION func3(f3: integer) : real;
; 10:     VAR P: array [0..2] of real;
; 11:     BEGIN {func3}

FUNC3_047 is the address of the return value (a 4-byte real). Space for the return value is allocated before the function is called.

FUNC3_047   EQU <[ebp+16]>

F3_048 is the address of the formal parameter f3, after the function setup code is complete:

F3_048      EQU <[ebp+12]>

.CODE

FUNC3_049 marks the calling address of this function. Notice that FUNC3 is used in two different senses, and requires two separate assembler names.
FUNC3_049 proc near
    push stlink+12      ; save the static link; this function is at level 3
    push ebp            ; save the previous EBP
    mov  ebp,esp        ; set EBP to the current stack offset
    mov  stlink+12,ebp  ; set the current static link address to EBP
    sub  esp,12         ; allocate local variable space for P (12 bytes in this example)

Uplevel Reference to a Variable


Assume that some variable is at nesting level 3, and that we are currently in level 5. Also, assume that the variable is accessed as [EBP-6]. The following three instructions are needed to fetch the variable. Similar code can be used to store a variable, fetch other variable types, etc.
mov ebp,STLINK+12   ; adjust EBP to the uplevel stack frame
mov eax,[EBP-6]     ; access the variable
mov ebp,STLINK+20   ; restore EBP to the current stack frame

If the variable is in the current stack frame, the first and third mov instructions can be eliminated.

Function closing
The end of a Pascal function is marked by an END, which matches the BEGIN of the function body:
; 20: END {func3} ;

This should trigger the generation of several instructions, as follows. The fld loads the function's return value into the FPU stack. This is where a floating-point number belongs upon returning from a function call. The stack space allocated for it is for temporary purposes while the function is being executed.
    fld dword ptr [ebp+16]
    mov esp,ebp      ; remove local variables
    pop ebp          ; restore EBP
    pop stlink+12    ; restore the static link
    ret 8            ; remove 8 more bytes from the stack just after the
                     ; return: the formal parameter and the return value
FUNC3_049 endp


Global Variables
In the 80x86 processor, global variables can be allocated from the data segment and referenced directly through a DS register offset. This form of access saves a few instruction cycles on each access, since it's no longer necessary to form a memory address from the value in EBP. No static link references are necessary.


Chapter 14: Control Structures


W. A. Barrett, San Jose State University nch14.doc

What Is a Control Structure?


A control structure in a programming language is a means of changing the order of execution from the normal sequential ordering. Sequential ordering means that statements are to be executed at runtime from top to bottom in the source, like this:
k := 15;         { 1 }
proc(i, j, k);   { 2 }
r[22] := 33;     { 3 }

Here, we expect the statements to be executed in the order given, first number 1, then number 2, and then number 3. Not all languages support or require sequential ordering. For example, a make file contains rules that (with a few exceptions) can be in any order. We've also seen that the production rules in a context-free grammar can be written in any order, again with a few exceptions. There are also languages in which all operations are performed through function calls, with no mechanism of sequential execution provided at all. Pure Lisp is an example of such a language [1, 2]. It is possible to write powerful and sophisticated programs using only function calls, provided that the notion of a function is extended to include a conditional evaluation structure [3, 4]. There are also sections in Pascal and C in which the ordering is arbitrary. For example, declarations can be written in any order. The following set of declarations:
var
  i, j, k: integer;
  r1, r2: real;

are equivalent to these:


var
  r2: real;
  k, j: integer;
  r1: real;
  i: integer;

Executable Code
Within a block of executable code, the ordering of the statements is not arbitrary. In the example at the beginning of this chapter, if we interchanged the order of statements 1 and 2, the effect at runtime would most likely be completely different. Having the value of k set before calling procedure proc rather than afterward will make a difference in how the procedure's evaluation proceeds.

What makes the order important are the side effects of the statements. Each assignment statement changes the value of some variable, and that variable's value is important in a subsequent statement. Procedure and function calls may also change variables. File read/write operations also generate side effects, either in variable values or in the file.

The purpose of a control structure is to change this top-to-bottom ordering of statement execution. The execution order can be changed unconditionally through a goto statement, or conditionally through any of several structured control statements. Typically, a variable is tested for its current value, then control is transferred to some other statement, depending on the result of the test.

Pascal Control Structures


Pascal provides these control structures:
goto K
if B then S
if B then S else S
while B do S
repeat S until B
for k:=E1 to [downto] E2 do S
case E of
  1,2,3: S;
  5: S;
  9,10,11,12: S;
  otherwise S;
end
begin S ; S ; S ; ... S end

These are similar to the control statements found in C, though of course with somewhat different syntax. In these examples, the letter B stands for some variable or expression that evaluates to a Boolean value, i.e. true or false. The letter S stands for some statement. This can be an assignment statement, a procedure call, or any one of the control statements given above. Since S can be a control statement, we see that control statements can be nested indefinitely, like this:
if B then if B then while B do S;

Quite complicated and very general programs can be built up in this way. One of the principles of structured programming is to avoid or completely eliminate the goto control form. It's possible to organize the logic of any program in such a way that no use of the goto statement is required. That's done by selecting an appropriate set of nested control statements.

The letter E stands for some variable or expression that evaluates to an integer value. This appears in the case statement as the tested variable. The case statement is similar to the switch statement in C. Depending on the value of E at runtime, control will be transferred to one of the statements within the of .. end section. The labels 1,2,3, 5, 9,10,11,12 determine which statement will be executed. If the value of E doesn't match any of these, then control passes to the statement following the end, or (in this example) to the statement labelled otherwise.

The for statement executes statement S for a range of integers whose limits are the two expressions E1 and E2. Variable K must be an integer, enumerated type or subrange. Statement S is first executed for K equal to the value of E1. K is then incremented (or decremented) by 1, tested against the limit E2, and S is executed again if the limit isn't exceeded. Expression E1 is evaluated just once; in our implementation, E2 is re-evaluated on each iteration, after the assignment to K is made, as we'll see when compiling the for statement below. Statement S may depend on the variable K as well as other variables, of course.

The begin ... end control structure provides a way of grouping a sequence of statements together. It's used in a way similar to the { ... } bracketing found in C.


Procedure and Function Calls


A procedure or function call is also a form of control statement. When a procedure or function F is called, control is transferred to the first statement in the block of code associated with F. This will usually be enclosed in a begin .. end pair found after the function's header, like this:
procedure p1(k: integer; r, s: real);
var i1: integer;
  { other declarations }
begin    { where the procedure code starts }
  i1 := 2*k-15;
end;     { where the procedure code ends }

The runtime system remembers the location of the call, so that when the end of the function's code is reached, control will return to the statement following the call. This remembering action is done by pushing a return address on the runtime stack. The ret instruction will later pop the return address and transfer control to it, as we've described in chapter 13.

A function or procedure in Pascal returns by passing control into the end of its code block. Pascal provides no return keyword as found in C, so it's up to the programmer to arrange the control flow to pass into the end in order to return. If the programmer has avoided the use of goto, this will happen automatically through the normal nesting and other actions of the statements comprising the function's code.

Path of Execution
Pascal, C, and other procedural languages are designed to maintain total control of the execution sequence. By this, we mean that the language is such that the next thing to do is always well defined. It's not possible to write a program in which execution just falls off the edge of the world into an undefined region of instruction memory. (There are exceptions. Programs which misuse pointers or exceed array bounds can damage the runtime stack or instructions, which in turn may cause a loss of execution sequence control.)

Note that this isn't true of an assembly program, in general. It's easy to forget to include a ret instruction at the end of a procedure block. At runtime, the CPU will just blindly read whatever's in the following memory locations and attempt to execute what it finds there as instructions. A processor is designed to always execute instructions in memory sequence, regardless of the consequences (unless the instruction is unimplemented or a branch). These may be data bytes in your program, or pieces of instructions from some other program that was left over. Execution will continue until an illegal (unimplemented) instruction appears, or the instruction is outside the legal memory bounds, causing an operating system trap. When an illegal instruction is fetched, an interrupt calls an operating system function that terminates your program with a message such as "unimplemented instruction". Even then, the microprocessor continues to fetch and execute instructions; it's just that these are part of the operating system, in a carefully crafted environment of instructions.

A high-level language's syntax and semantics are designed to avoid control problems of that sort. Of course, this only applies to a syntactically correct program that's been correctly compiled, and that has no serious pointer or array bugs at runtime. The measures taken to prevent falling off the world are these:

- There's a defined next statement in every situation.
- The runtime stack is carefully managed during execution so that the return address pushed in a function call will be in exactly the right place (at the stack top) when returning from the function. This requires careful attention to the use of the runtime stack in every operation. Every push must have a corresponding pop later on, and any compiler is expected to satisfy this requirement exactly. This will guarantee that the ret instruction will always fetch a valid return address from the runtime stack, and not some garbage word.
- At the end of the program's execution, control is returned to the operating system in an orderly manner. This usually requires that the program be started through a special protocol so that the return can be gracefully done.

Compiling Control Statements


The task that confronts us as compiler designers is to translate each of the Pascal control statements into sequences of target assembler code. The Pentium processor (and most other modern microprocessors) supports control flow through a set of branch instructions. These are designed to test a flag bit or a combination of flag bits. Depending on the flag bit state, control is either transferred to some other code location, or to the following instruction. The flag bit state is typically set through a comparison instruction, which determines whether its two operands are equal, not equal, less than, etc. Thus the comparison instruction has no idea just how the comparison will be used, and the later branch instructions have no idea of how the flag bits were set.

The branch location address is part of the branch instruction at runtime, usually a word or a double word. In the Pentium, this number usually is relative to the branch instruction's location. Thus if the branch instruction's location is (say) 2580, and the target address is 2600, then the relative address is 20 (2600 - 2580). This helps make the code relocatable: it no longer matters just where the program is loaded in memory. Since all branches are relative to the current instruction, they don't have to be changed just because the program may be loaded starting at (say) address 0x46FF0 rather than 0x5733A.

We are generating symbolic assembler, so we don't need to worry about just how the addressing works. Instead, we can just assign a symbolic address (that is, a name) to the distant location, and let the assembler work out the details. So our symbolic assembler target code will look like this:
        ; do something to set the flags
        jle     $L_004      ; jump to location $L_004 if the result is <=
        ; do these instructions if the result is >
        ; ...more instructions
$L_004:                     ; mark the distant location
        ; ...more instructions

It should be clear from this example that the assembler considers $L_004 as an identifier. The dollar sign is just part of the name, and makes these names distinct from any user identifiers. This name, $L_004, will be manufactured by the compiler on demand. It's just the string $L_ concatenated with a three-digit number, so we can generate many distinct labels as needed.

In this code fragment, the jle can transfer control to either of two locations: the one specified in the instruction, or the next location. The compiler's responsibility is to make sure that both target addresses are covered by more valid instructions.

We therefore arrive at a general pattern for all the control statements in which a Boolean is tested (if, while, for, repeat), which is to:
1. generate code to evaluate the Boolean expression,
2. test the Boolean expression through cmp or some other instruction that sets the flag bits,
3. use a conditional branch instruction to implement the control logic.

The first two steps can often be combined. For example, if the Boolean expression is a simple variable bx13, then the first two steps can be combined into the single instruction
cmp bx13,1

If bx13 is true at runtime, i.e. has the numeric value 1, then the sign flag SF will be 0 and the zero flag ZF will be 1 just after the cmp is done. If bx13 is false at runtime, i.e. has the numeric value 0, then the sign flag SF will be 1 and the zero flag ZF will be 0. We can use either or both of these flag bits in various ways.

We've chosen to use the jne instruction (jump on not equal) for this kind of test in much of our generated assembler code. jne will fall through on a true (the Boolean is equal to 1), and branch on a false (the Boolean is not equal to 1). This will perform the third step.

We'll see that more optimizations are possible if the Boolean expression is some combination of and, not and or operators. The logic of the test can be organized, using our AST framework, to minimize the number of tests and branches required.

Compiling the if-then Statement


Consider the one-sided if-then statement, whose pattern is
if B then S;

The statement S can be simple or complicated. It doesn't matter, since we expect that some other part of the compiler will reduce it to some sequence of assembler instructions -- we only need to know where the instructions for S start and where they end. Also, the Boolean B may be a simple variable or something more complicated. If it's a variable, we've seen that the single instruction cmp variable,1 will set the flags. If it's a more complicated expression, we need to explore various optimization opportunities; for now, let's assume that the compiler knows how to generate a sequence of code that leaves the value of the Boolean B in register AL, as a 0 or a 1. We can then issue the instruction
cmp al,1

after the code to evaluate the B expression. This cmp will of course set the flag bits. So we will have a sequence of code for the B part, like this:
; evaluate the Boolean B, setting flag bits by comparing to 1

We have another sequence for the S part, which we'll write like this:
; evaluate the statement S

We can now insert two more assembler instructions that perform the if-then logic surrounding these two components. We clearly need to add a branch instruction and a target, like this:
        ; evaluate the Boolean B, setting flag bits by comparing to 1
        jne     $L_022
        ; evaluate the statement S
$L_022:

This clearly implements the if-then statement's logic, as we'll now show. Suppose that the Boolean B evaluates to a true, i.e. the value 1. Then the jne falls through to the code for S. That will be evaluated in sequence, and the instruction following it will be whatever follows the if-then. Suppose the Boolean B evaluates to a false, i.e. the value 0. The jne branch will then be taken, skipping the code for S. As before, the next instruction executed will be whatever follows the if-then.

We'll explore variations on this pattern as an optimization means. However, we are required by the nature of the instruction set for the Pentium to follow this basic pattern of setting flag bits, then branching on a flag state. Of course, we also need to be careful to not change the flag bits in between, which would be easy to do through some other instruction.


Compiling the if-then-else Statement


The if-then-else statement is written this way in Pascal:
if B then S1 else S2;

The idea here is to execute statement S1 if the Boolean evaluates to true, skipping statement S2. If the Boolean evaluates to false, we want to skip statement S1 and execute statement S2. The compiler will of course generate some sequence of assembler instructions for each of the statements S1 and S2, the details of which needn't concern us here. We clearly need the following framework for this control statement:
        ; evaluate the Boolean B, setting flag bits by comparing to 1
        jne     $L_023
        ; evaluate the statement S1
        jmp     $L_024
$L_023:
        ; evaluate the statement S2
$L_024:

Notice that we need two statement labels for this, one to mark the start location of the S2 block ($L_023), and another one to mark the location of the code following this statement ($L_024). We don't need a label to mark the start location of the S1 block, since this is the fall-through of the jne instruction. The jmp $L_024 causes statement S2 to be skipped after executing statement S1. Of course, the jne $L_023 skips statement S1.

Compiling the while-do Statement


The while-do statement has this pattern:
while B do S;

Here, we test B. If true, statement S is executed, then we test B again, etc. When B first tests false, S is skipped, and control is transferred to the following statement. We clearly need the following assembler framework to implement this. It requires two statement labels:
$L_048:
        ; evaluate the Boolean B, setting flag bits
        jne     $L_049
        ; evaluate the statement S
        jmp     $L_048
$L_049:

The logic paths should be clear by examining each of the alternatives:

- If B is false when the statement is first entered, the jump to $L_049 will skip statement S, passing control to the following statement.
- If B is true when the statement is first entered, statement S will be executed. The jmp following S will send control back to the top to evaluate B again, repeating this process.
- Statement S is therefore executed repeatedly until the Boolean B finally evaluates false, in which case control is passed to the following statement.
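The compiler-side arrangement follows the same two-rule pattern that we develop for the if statement later in this chapter. The sketch below assumes a pair of rules named WHILE_STMT and WHILEBOOLEAN (invented names; the actual rules are in Pascal.grm), and assumes genCode can emit a jmp to a numbered label just as it emits conditional jumps. Treat it as an illustration of where the two labels come from, not as the pascal5 source.

    // Assumed rules, mirroring the if-then treatment developed later:
    //   Stmt      : WHILE WhileExpr DO Stmt    #WHILE_STMT
    //   WhileExpr : Expr                       #WHILEBOOLEAN

    // #WHILEBOOLEAN fires after Expr is reduced, before the body is compiled:
    short topLabel=  newLabel();
    short exitLabel= newLabel();
    genLabel(topLabel);                    // $L_048: the Boolean is re-tested here
    Expr->evalControl(exitLabel, true);    // fall through on TRUE, branch out on FALSE
    // ...both label numbers must ride the parser stack inside the WhileExpr object...

    // #WHILE_STMT fires after the loop body has been compiled:
    genCode("jmp", topLabel);              // close the loop: go re-evaluate B
    genLabel(exitLabel);                   // $L_049: first instruction after the loop

The repeat-until statement of the next section needs even less machinery: one genLabel before the body, and a single call B->evalControl(top, true) when the until clause is reduced, which falls through (exiting the loop) on a true B and branches back on a false one.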

Compiling the repeat-until Statement


This statement,
repeat S until B;

is very similar to the while-do statement. Statement S is evaluated before testing B. The B test then becomes a decision whether to evaluate S again or not. We clearly want the assembler framework given next:


$L_56:
        ; evaluate statement S
        ; evaluate Boolean B, setting flag bits by comparing to 1
        jne     $L_56

Notice that we only need one instruction label. Also notice that we wish to return to the top on a false Boolean value. Recall that we've chosen to evaluate the Boolean B through an instruction like this:
cmp B,1

for which a false B will cause the jne to branch rather than fall through.

Compiling the for Statement


The Pascal pattern is either of these two:
for k:=E1 to E2 do S;
for k:=E1 downto E2 do S;

These require somewhat more work than the previous control structures. We need to supply an incrementation or decrementation for the variable k, which must be a simple integer variable. The expressions E1 and E2 need to be evaluated. The value of E1 can be immediately assigned to variable k. E2 must be evaluated on each iteration, including the first, so it could depend on variable k or on some variable changed in S. Variable k must be compared against the E2 value on each iteration, including the first one. We note that the statement S may never be executed if the range is empty. Notice that the fragment
k:=E1

is equivalent to an assignment statement, except for the constraint that k must be a simple integer identifier, not something more complicated. So if we have a compiler function to process an assignment statement (which we do), we should use it here. Note that the assignment is done just once, while the comparison and E2 evaluation are done on each iteration.

Pascal also requires that the variable k be equal to the endpoint value of E2 upon completing one or more iterations. Unfortunately, it isn't clear what k should be if no iterations are done. We've assumed that it will be E1-1 in the to case and E1+1 in the downto case. This is consistent with the one-or-more iteration case, and leads to simple code. Here's the required assembler framework for the to case:
        ; evaluate E1, leaving result in register EAX
        mov     k,eax
$L_061:
        ; evaluate E2, leaving result in register EAX
        cmp     k,eax
        jg      $L_062
        ; evaluate S
        inc     k
        jmp     $L_061
$L_062:
        dec     k

Notice that for no iterations, k is decremented by one. This can only happen if E2 < E1. So instead of setting k to E2 in this case, which would require additional coding, we set it to E1-1. In all other cases, i.e. E2 >= E1, k's exit value is E2. The downto case is similar. Notice the difference in the conditional branch and the inc and dec instructions:
        ; evaluate E1, leaving result in register EAX
        mov     k,eax
$L_061:
        ; evaluate E2, leaving result in register EAX
        cmp     k,eax
        jl      $L_062
        ; evaluate S
        dec     k
        jmp     $L_061
$L_062:
        inc     k

We leave an analysis of this framework to the reader, who should be able to show that it satisfies the Pascal rules for this statement form.
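In compiler terms, the to-case framework amounts to the following emission sequence. This is a linear sketch only: in the real bottom-up parser the body S is compiled between a header rule and a trailer rule, exactly as for the if statement discussed later, and the one- and two-argument genCode forms are assumed conventions rather than the pascal5 signatures.

    // Sketch of emitting the to-case framework for "for k:=E1 to E2 do S".
    // The downto case swaps jg/inc/dec for jl/dec/inc.
    short top=  newLabel();
    short exit= newLabel();
    E1->eval();                      // code that leaves E1's value in EAX
    genCode("mov k,eax");            // k := E1, done exactly once
    genLabel(top);                   // $L_061:
    E2->eval();                      // E2 is re-evaluated on every iteration
    genCode("cmp k,eax");
    genCode("jg", exit);             // leave the loop when k > E2
    // ...the code for statement S is generated here, between the two rules...
    genCode("inc k");
    genCode("jmp", top);             // back to $L_061
    genLabel(exit);                  // $L_062:
    genCode("dec k");                // k ends at E2, or at E1-1 if no iterations ran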

Compiling the case Statement


The most complicated control statement is the case statement, which follows this pattern:
case E of
  1,2,3:        S1;
  5:            S2;
  9,10,11,12:   S3;
  otherwise     S4;
end

The production rules guarantee that this syntax is followed in a general way. That is:

- The case, E, of, end elements will all be in place.
- The numeric labels, commas, colons, and statement syntax are correct.
- Only one otherwise appears, and (if present), it's the last one in the labelled set.

However, the following are not checked through the production rule syntax checking process:

- The statement label types must be compatible with the type of E. For example, if E is type integer, then each of the labels must be an integer constant. If E is an enumerated type, then each of the labels must be compatible with that type. If E is type char, then each of the labels must be a small integer or quoted character.
- A statement label should not appear more than once.

These cannot be controlled through the production rules of the grammar. Type checking is generally done through the help of a symbol table mechanism. As to statement labels appearing more than once, we need something like a local symbol table with which a duplicate label can be detected. Thus, in addition to generating code for this form, we need to separately verify these two constraints.
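The duplicate-label check needs nothing more elaborate than a small set of the labels seen so far, cleared at each case header. Here is a self-contained sketch; std::set stands in for the "local symbol table" just mentioned, and the function name and messages are invented for the illustration.

    #include <cstdio>
    #include <set>

    static std::set<long> caseLabelsSeen;   // cleared when a new case header is parsed

    // Called once per case label as the labelled statements are reduced.
    // typeCompatible is the verdict of the usual symbol-table type check.
    bool checkCaseLabel(long value, bool typeCompatible) {
        if (!typeCompatible) {
            std::fprintf(stderr, "case label type incompatible with case expression\n");
            return false;
        }
        if (!caseLabelsSeen.insert(value).second) {  // insert reports a duplicate
            std::fprintf(stderr, "duplicate case label %ld\n", value);
            return false;
        }
        return true;
    }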

Three Different Implementation Approaches.


We can implement a case through a sequence of test-and-branch instructions, through a branch table, or through a binary tree test-and-branch. The test-and-branch approach would look like this in assembler, using the above example:
        ; evaluate E, leaving the result in EAX
        jmp     $L_007
$L_008:
        ; evaluate statement S1
        jmp     $L_011      ; skip to end of the case
$L_009:
        ; evaluate statement S2
        jmp     $L_011
$L_010:
        ; evaluate statement S3
        jmp     $L_011
$L_007:
        cmp     eax,1
        je      $L_008      ; statement S1
        cmp     eax,2
        je      $L_008
        cmp     eax,3
        je      $L_008
        cmp     eax,5
        je      $L_009      ; statement S2
        cmp     eax,9
        je      $L_010      ; statement S3
        cmp     eax,10
        je      $L_010
        cmp     eax,11
        je      $L_010
        cmp     eax,12
        je      $L_010
        ; evaluate statement S4, the otherwise case
$L_011:

Notice that the evaluation of E appears first, followed by the code for the statements. These happen to be in the same order as they appear in the Pascal source code. It doesn't matter how these statements are ordered, given that two branches are needed to reach them in any case. So they might as well be coded in the order in which they appear in the source program. The compiler needs to insert a branch label and a jmp instruction as a framework for each statement, of course. That makes it possible for this part of the compiler to operate in a code-as-you-go or one-pass mode. However, the branching tests are deferred until all the statements have been processed.

So after evaluating E, control transfers to a series of cmp / je (test and branch) instructions, one for each case label. The first comparison that succeeds causes a branch to its associated statement code. When that statement execution is complete, control passes to $L_011, found at the end of the case statement. If none of the comparisons succeed, control passes to the otherwise statement, and thence to the end of the case, with no further tests required.

Criticism: While a long sequence of test-and-branch instructions is easy for the compiler to generate, it has the obvious disadvantage of being inefficient at runtime. (It has O(N) complexity.) If the most popular statement is near the end of the case, the code requires testing the expression value against each of the preceding case labels. For a large case statement involved in a heavily executed loop, the performance hit can be very obvious. An optimizing compiler should do a better job in organizing this control problem.

Binary-Tree Test and Branch


Another very attractive approach to case encoding, suggested by Robert Fraley, starts by collecting all the case labels, sorting them in increasing order, then forming a balanced binary tree from them. At runtime, the case expression value V is compared against the root label of the tree, R. If V == R, then we've found the correct statement, and can branch to it. If V < R, then we go to the left node of the root and repeat the test. Otherwise, we go to the right node of the root and repeat the test. The same test is made on each node of the tree. If a left or right node doesn't exist, then the otherwise case applies.

What makes this approach attractive is that the label values can be very sparse without demanding excessive memory space. Also, the average number of accesses in the tree is approximately log2(N), where N is the number of labels. This is considerably better than the performance of a linear test-and-branch sequence.

This method can be embellished by testing for a range of case labels. The Pascal syntax can easily be extended to support a subrange as a case label, for example,
22..27:

rather than
22,23,24,25,26,27:

Given a subrange, this method can be extended to test for membership in a range at any one node by testing separately against the lower and the upper bound. This strategy has O(log N) complexity in general, and does not depend on the sizes of the labels, which may be some mixture of very large and very small numbers.
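A sketch of the emission side of this scheme: sort the label/statement pairs, then recursively emit a compare at the midpoint of each subrange. The CaseArm pairing, the label-numbering scheme, and the printf emitter are invented for the illustration; a production version would also coalesce subranges as just described.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct CaseArm { long label; int stmtLabel; };   // case label -> $L_nnn of its statement
    static int nextLabel= 500;                       // source of fresh internal labels

    // Emit compares for arms[lo..hi] (sorted by label); the case expression
    // value is assumed to be in EAX. Every emitted path ends in a jump, so
    // the two subtrees can simply be laid out one after the other.
    void emitTree(const std::vector<CaseArm>& arms, int lo, int hi, int otherwiseL) {
        if (lo > hi) {                                // empty subtree: no label matches
            std::printf("\tjmp\t$L_%03d\n", otherwiseL);
            return;
        }
        int mid= (lo + hi) / 2;
        std::printf("\tcmp\teax,%ld\n", arms[mid].label);
        std::printf("\tje\t$L_%03d\n", arms[mid].stmtLabel);   // V == R: found it
        int leftL= nextLabel++;
        std::printf("\tjl\t$L_%03d\n", leftL);        // V < R: go to the left subtree
        emitTree(arms, mid + 1, hi, otherwiseL);      // V > R: falls into the right subtree
        std::printf("$L_%03d:\n", leftL);
        emitTree(arms, lo, mid - 1, otherwiseL);
    }

Calling std::sort on the arms by label and then emitTree(arms, 0, arms.size()-1, otherwiseLabel) handles the example's eight labels with at most four compares on any path, versus up to eight for the linear sequence.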

Using a Branch Table


An alternative to test-and-branch coding of a case statement is using a branch table. A branch table is an array of addresses of the case statements, arranged so that a simple indexing operation provides an immediate transfer to the desired statement. The idea is to reduce the case expression E to a branch table index, look up an address in the branch table, then jump to that address. The time required to transfer to any of the case statements will therefore be the same, regardless of how many statements or labels there are, i.e. the complexity will be O(1).

Look at our example case statement. The labels range from 1 to 12, with a few omissions (no 4, 6, 7 or 8 label). So we need an array of 12 entries, each one carrying the code address of a statement. The entries corresponding to labels 1, 2 and 3 will point to statement S1. The entry for label 5 will point to statement S2. The entries for labels 9-12 will point to S3. All the others will point to the otherwise statement S4. We also need to send control to the otherwise statement S4 if E is less than 1 or greater than 12, and this test must precede attempting to access the branch table.

Switching control to one of a set of addresses in an array can be done with the jmp [ebx] instruction. This expects a data offset in ebx, then looks up an offset in the table, finally branching to that offset. Here's the framework for the case example given above, using a branch table. This code should replace the test-and-branch series following the statement coding, but is executed first at runtime:
        ; (value of E is in EAX)
        cmp     EAX,12          ; the largest label
        jg      $L_012          ; go to the otherwise statement S4 if too large
        cmp     EAX,1           ; the smallest label
        jl      $L_012          ; go to S4 if too small
        sub     EAX,1           ; form a 0-based index
        shl     EAX,2           ; EAX*4 to form a byte offset
        lea     EBX,$L_092      ; address of the branch table
        add     EBX,EAX         ; address of the desired statement address
        jmp     [EBX]           ; transfer control to the desired statement
        .DATA
$L_092:                         ; start of the branch table; has to be in DATA space
        dd      OFFSET $L_008   ; label 1, to S1
        dd      OFFSET $L_008
        dd      OFFSET $L_008
        dd      OFFSET $L_012   ; label 4, to S4, the otherwise case
        dd      OFFSET $L_009   ; label 5, to S2
        dd      OFFSET $L_012
        dd      OFFSET $L_012
        dd      OFFSET $L_012
        dd      OFFSET $L_010   ; label 9, to S3
        dd      OFFSET $L_010
        dd      OFFSET $L_010
        dd      OFFSET $L_010   ; label 12, to S3
        .CODE

Criticism. The number of instructions required at runtime to reach any of the branch statements is now essentially the same, a small number, regardless of the number of branch statements. This can provide a major improvement in runtime performance, compared to the test-and-branch approach. However, there's a price to be paid for this, and that's in the size of the branch table, if the statement labels are sparse and widely separated in value. For example, our case statement might have only 3 labels, with values -30000, 0, and 30000. A branch table would have to contain 60,001 entries of 4 bytes each, and nearly all are directed to the otherwise clause. That's an enormous amount of data space to allocate for such a simple operation. So this approach is obviously inappropriate when the case labels form a sparse set of integers, i.e. a large overall range, but only a few labels within the range.
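This tradeoff can be mechanized: the compiler can inspect the label set and pick an encoding. The following heuristic is purely illustrative (the thresholds are invented), but it captures the reasoning of the last few sections.

    #include <algorithm>
    #include <vector>

    enum class CaseStrategy { Linear, Table, Tree };

    CaseStrategy chooseStrategy(const std::vector<long>& labels) {
        if (labels.size() <= 4)
            return CaseStrategy::Linear;           // a handful of tests is cheapest
        auto [lo, hi]= std::minmax_element(labels.begin(), labels.end());
        long range= *hi - *lo + 1;                 // branch-table slots required
        if (range <= 2 * (long)labels.size())
            return CaseStrategy::Table;            // dense: O(1) dispatch, modest table
        return CaseStrategy::Tree;                 // sparse: O(log N) compares, no table
    }

On the running example (8 labels spanning 1..12) this picks the table; three labels at -30000, 0, and 30000 stay with the linear tests; and a large, widely scattered label set gets the tree.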

Optimizing Boolean Expressions with Branches


A Boolean expression in Pascal results from some combination of these operators and operands:

- comparison of two integers or reals through one of the operators <, >, <=, >=, =, or <>, which effectively yields a Boolean value. For example: x <= y.
- use of a Boolean variable, constant or literal.
- use of one of the logical operators and, or, not. These require two or one Boolean operands and yield a Boolean. For example: b and not x>y.
- use of the in operator. This tests an integer value for its membership in a power set, and yields a Boolean. For example, k in s, where k is an integer and s is a set of integers.

A Boolean expression is used in two different ways in Pascal:

- as a way of forming a Boolean value to be assigned to a Boolean variable or passed by value to a function. For example, b := x < y;
- as a way of deciding on a branching condition in a control statement. For example: if x < y then S1 else S2;

A Boolean expression can be evaluated in either of these essentially equivalent ways:

- by using the processor's logical operators, i.e., the and, or, xor, and not instructions. These form a bitwise logical combination of two or one operands. For example, an and of two Boolean variables A and B can be generated by the following instructions:
        mov     al,A
        and     al,B

- by using branch instructions. Here, an and of two Boolean variables A and B can be generated by the following instruction sequence:
        mov     al,0
        cmp     A,1
        jne     $L_008
        cmp     B,1
        jne     $L_008
        mov     al,1
$L_008:


This starts by setting register al to 0 (false). The compare and branch logic essentially guide the control either into or around the second mov instruction, which sets al to 1 (true). The result is a Boolean value in register al.

It's clear that when we need the result of a Boolean expression as a Boolean true/false value, coding logical operations is more efficient. The logical instructions are also faster than the corresponding branch instructions, especially in processors that support instruction caching, since a branch instruction can potentially invalidate an instruction cache and require fetching a new one, while the logical operations are fast and do not upset instruction caching.

However, the use of branch instructions for logical operators is very useful in control structures, and will often result in less code and more efficient code than logic instructions. The reason is that the Boolean expression in a control structure is expected to result in a branch, so it makes little sense to generate a Boolean value somewhere that must then be tested to generate a branch. For example, if the Boolean expression A and B is used in an if-then statement, like this:
if A and B then S1;

an optimal instruction sequence would be this:


        cmp     A,1
        jne     $L_010      ; A is false, so we don't need to test B
        cmp     B,1
        jne     $L_010      ; B is false
        ; instructions for statement S1 -- both A and B are true
$L_010:

If we instead insisted on using the and instruction to first evaluate A and B as a Boolean, then did the tests, we'd have this code:
        mov     al,A
        and     al,B        ; form a Boolean product
        cmp     al,1        ; test for the sake of a branch
        jne     $L_010      ; do the if branch
        ; instructions for statement S1
$L_010:

The number of instructions is the same in this case, but we could argue that the branching instructions may be slightly more efficient, since B is never tested if A tests false. Not testing B when A is false is in fact a language requirement in C, and is guaranteed by many Pascal dialects as well. B may be a function call with certain preconditions or side-effects, and we may not want to call it if A tests false. So we are often left with no alternative but to use a branching control structure. A more convincing case is one in which the comparison operators appear along with some Boolean operators, for example,
if (x < y) and (y < z) then S1;

Assume that the variables are integers. Then, an optimal logical encoding would look like this:
        mov     eax,y
        cmp     x,eax       ; form a Boolean 1 or 0 from the comparison x<y
        mov     ax,1
        jl      $L_076
        xor     ax,ax
$L_076:
        push    ax
        mov     eax,z
        cmp     y,eax       ; form a Boolean from y<z
        mov     ax,1
        jl      $L_077
        xor     ax,ax
$L_077:
        pop     dx
        and     ax,dx       ; form the and
        cmp     ax,1        ; test the Boolean and
        jne     $L_078
        ; instructions for S1
$L_078:

Notice that forming a Boolean value from a comparison of two numeric values requires a branch by itself in this code. (The 80386 and later processors do provide the setcc family of instructions to capture a comparison result as a 0/1 byte, but the comparison-then-branch pattern is the one used here.) Compare that long sequence to the following branch encoding of the same statement:
        mov     eax,y
        cmp     x,eax
        jge     $L_077      ; x<y is false
        mov     eax,z
        cmp     y,eax
        jge     $L_077      ; y<z is false
        ; instructions for statement S1
$L_077:

It's clear that for a certain class of expressions, branch encoding is superior to logical encoding in performance. These are expressions that make use of comparison operators as used in a control statement conditional. Other forms of Boolean expressions appear to be best encoded with logical instructions.

These two evaluation approaches appear to be radically different, but in fact can be combined to form a reasonably well optimized approach to encoding. An optimal approach is summarized as follows:

- All expressions should be formed as an abstract syntax tree, with no code generated until the purpose of the expression is evident. In a bottom-up parser, the expressions in a control structure are parsed and formed before the control structure is evident, so it's important that they be collected into an AST.
- When an expression is used as the branch control of a control structure, i.e. a Boolean used for branching, it's probably best to evaluate it as branching code rather than as a Boolean expression. This is particularly true if one or more comparison operators appear in the tree. If there are no comparison operators, only variables connected with logical operators, then an expression evaluation may be more efficient.
- When an expression is used to form an index, or a value to be passed to a function, or a value in the right member of an assignment statement, logical operators are likely the most efficient.

In our compiler, we've chosen a simple approach: branching code is generated for all control structures and logical code for everything else. Some expression forms are not necessarily the most efficient, but this is a reasonable compromise between compiler complexity and runtime efficiency.

Generating Branching Code


As we've discussed above, branching code uses the flags and conditional branch instructions to encode a Boolean expression. We also claim that this is usually more efficient, or about equally efficient, as using the built-in bitwise logical instructions in many cases.

Our overall plan is to carry any expression as an abstract syntax tree. We've seen that this facilitates constant folding and other optimizations. It also facilitates optimizing branching code. The production rules for any expression are rooted in the nonterminal Expr (see file Pascal.grm, found in directory pascal5). This derives expressions using any of the conditional operators <, >, <=, etc., the arithmetic operators +, -, *, /, DIV and MOD, the logical operators AND, OR, NOT, and the simple and compound variables (named variables, literals, array dereferencing, record field dereferencing, and pointer dereferencing). The precedence of these is controlled by the way in which the production rules are structured, but that only affects the ordering of these operators as they are built into the AST.

Once an AST is complete for some expression, other production rules determine just how it should be evaluated. For example, the rule
AssignStmt : LVariable ":=" Expr #ASSIGN

is itself built into an AST and later evaluated through a call to evalStmt. This function is designed to evaluate assignment statements and procedure calls. It evaluates the Expr AST in logical fashion by encoding all Boolean operations as bitwise logic instructions. There's no need to generate conditional branches, since they will usually be less efficient than bitwise logic. Now look at the rules
Stmt   : IF IfExpr Then Stmt   #IF_STMT
IfExpr : Expr                  #IFBOOLEAN

We clearly want branching-style evaluation of Boolean operators in the IfExpr AST, since that's the whole purpose of this expression form. But that applies to the top levels (near the root) of the AST, and only for these operators:
< > <= >= = <> AND OR NOT IN

We also generate branching code if IfExpr is a simple Boolean variable or a constant. For any other operator found at the root level, we use function eval which generates a Boolean value (0 or 1), leaving the result in register AL. It happens that no other top-level Boolean operator is possible. The Pascal type rules require that IfExpr must be type Boolean, and these operators exhaust all those that yield a Boolean. Simple variables must also be type Boolean. Below the root level in an AST, we may find other types. For example, two reals can be compared. Parenthesizing can be used to form more complicated expressions that contain other Boolean operators, for example,
(b1 < b2) = true

which contains the Booleans b1, b2 embedded in a conditional. These are fetched into AL through the eval function since they lie deeper in the AST. The top-level operator is =, and it will be evaluated by branching code.

Function evalControl
Boolean branching code is generated by function evalControl, found in file eval.cpp. This requires two parameters, a statement label number label and a Boolean ifTrue. The general idea is that this function is supposed to generate a conditional branch based on the current AST node (the Ceval object bound to this function call). Control will either fall through to the instructions following those generated by this function, or branch to the label. How control transfers depends on ifTrue.

Suppose that ifTrue is true. Then the conditional branch is expected to fall through on a true Boolean, at runtime, and to branch to label on a false Boolean. On the other hand, if ifTrue is false, the opposite should occur: the conditional branch is expected to fall through on a false Boolean, and branch to label on a true Boolean. We'll discuss how to use this powerful function next through some examples.
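Before examining the individual operators, it helps to see the overall shape of the routine. The skeleton below is inferred from the cases quoted in the following sections and from the cmp al,1 convention described earlier; the default arm in particular is a plausible reconstruction, not a quotation from eval.cpp.

    void Ceval::evalControl(short label, bool ifTrue) {
        switch (getsemType()) {
        case NOTOP:
            // invert the sense and recurse -- see "Unary NOT Operator"
            one()->evalControl(label, !ifTrue);
            break;
        case ANDOP:
            // two chained tests, one extra label -- see "Binary AND Operator"
            break;
        case OROP:
            // the mirror image of ANDOP -- see "Binary OR Operator"
            break;
        case LESS: case GTR: case LEQ: case GEQ: case EQ: case NEQ:
            // evalCmp plus condJump -- see "Comparison Operators"
            break;
        default:
            // anything else: materialize a Boolean (0 or 1) in AL, then test it
            eval();
            genCode("cmp al,1");                          // assumed one-string form
            genCode(condJump(ifTrue ? NEQ : EQ), label);  // jne on ifTrue, je otherwise
            break;
        }
    }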

Unary NOT Operator


To see how evalControl helps solve some interesting branching problems, consider how evalControl can be used with the NOT operator. Here's the relevant section of code as found in eval.cpp, function evalControl:
case NOTOP:
    one()->evalControl(label, !ifTrue);
    break;

Function one() returns a pointer to the leftmost child of this node (the only child, since not is unary). This will be the expression covered by the not operator. Given that evalControl does the right thing for parameters label and ifTrue, it should generate the reverse branching code for parameters label and !ifTrue. So the not operator in fact costs us nothing in instructions in the branching case, whereas each one will usually generate a not instruction in the bitwise case.

Binary AND Operator


A more interesting case is the binary AND operator, whose evalCompare code is:
case ANDOP:
    if (ifTrue) {
        one()->evalControl(label, true);
        two()->evalControl(label, true);
    }
    else {
        int fallthruLabel= currentParser->newLabel();
        one()->evalControl(fallthruLabel, true);
        two()->evalControl(label, false);
        genLabel(fallthruLabel);
    }
    break;

Consider the case ifTrue == true first. There are two Booleans to consider at runtime, the two AND operands. Call them b1 and b2. In this case, we want the generated code to fall through if b1 and b2 is true, and to branch to the label if b1 and b2 is false. Now b1 and b2 can only be true if both of the Booleans are true, and is false otherwise. The code therefore should test the first Boolean, b1, falling through if it's true, and branching if it's false. The fall-through code is generated by the second evalControl call, which does the same with the second Boolean, b2. We therefore fall through if both the Booleans are true, but will branch if either of them is false. Notice that if the first one is false, we don't need to test the second one. The generated code will therefore look like this:
        cmp     b1,1
        jne     label       ; goto label on false
        cmp     b2,1
        jne     label       ; goto label on false
        (b1 and b2 == true case)
label:
        (b1 and b2 == false case)

Clearly this falls into the true case only if both b1 and b2 are true, and branches to label if either is false. One detail will be noticed by the reader: what happens after the true case code is executed? Does control pass into the false case? No, this won't happen, but only because we will issue additional unconditional jumps and labels at a higher level. If another branch is required, that will be issued through an analysis of the relevant control structure. Thus the one-sided if-then doesn't need another branch, while the two-sided if-then-else does require one.

The case ifTrue == false is somewhat different. What we expect overall is for the code to fall through if b1 and b2 is false, but to branch to label otherwise. An efficient way to do this is to test the first Boolean b1 through a conditional branch: if b1 is false, the whole expression is false, and we can branch directly to a new fall-through label without testing b2. If b1 is true, control falls into the test of b2, which this time branches to label on a true result instead of falling through. To carry this out efficiently as branches, we need to introduce another label. The pattern looks like this:
        cmp     b1,1
        jne     label_1     ; goto label_1 on false
        cmp     b2,0
        jne     label       ; goto label on true
label_1:
        (b1 and b2 == false case)
label:
        (b1 and b2 == true case)

We note that when b1 and b2 is false, either b1 or b2 may be false. If b1 is false, the first conditional jump goes to label_1, which is effectively a fall-through. If b1 is true and b2 is false, the second conditional jump is not taken, and we fall through. In any other case, control transfers to label. Notice that although we've introduced another label, the total number of instructions required to carry out the branching is still four. Additional labels cost nothing. We've succeeded in arranging our tests and conditional branches in a way that minimizes the number of tests and branches required.

Binary OR Operator
The branching code generator for the OR operator is similar to that for the AND operator, as follows:
case OROP:
    if (ifTrue) {
        int fallthruLabel= currentParser->newLabel();
        one()->evalControl(fallthruLabel, false);
        two()->evalControl(label, true);
        genLabel(fallthruLabel);
    }
    else {
        one()->evalControl(label, false);
        two()->evalControl(label, false);
    }
    break;

As before, the ifTrue case is supposed to provide fall-through branching on a true Boolean. The code sequence for b1 or b2 is:
        cmp     BYTE PTR B1,0
        jne     $L_28       ; the fall-through label
        cmp     BYTE PTR B2,1
        jne     $L_27
$L_28:
        ; true case
$L_27:
        ; false case

Here, a true b1 causes a jump to the fall-through label without testing b2. A false b1 forces a test of b2, which does the correct thing. The !ifTrue case generates a code sequence like the following:
        cmp     BYTE PTR B0_051,0
        jne     $L_39
        cmp     BYTE PTR B1_052,0
        jne     $L_39
        ; false case
$L_39:
        ; true case

We want a fall-through if b1 or b2 is false. To actually get a fall-through, both need to be tested for 0.


Comparison Operators
The six comparison operators (<, >, <=, >=, =, <>) have a pair of operands that may be any of a variety of types. We can compare Booleans, integers, enumerated types, reals, strings and sets. The comparison work is done through the service function evalCmp, which also knows about this tree node, but is passed a semType anyway. The branching work is done by the service function condJump. The comparison function doesn't need to know about the ifTrue parameter, but condJump does. Here's the section of code as found in file eval.cpp, function evalCompare:
case LESS: case GTR: case LEQ: case GEQ: case EQ: case NEQ:
    {
        semType st= evalCmp(getsemType());
        genCode(condJump((ifTrue ? reverseSemt(st) : st)), lbl);
    }
    break;

Notice that function condJump expects a semType that reflects both the comparison operator and the state of the ifTrue parameter. For a true ifTrue, we pass the reverse semType, returned by function reverseSemt, since the branch must be taken exactly when the comparison fails. A less-than becomes a greater-than-or-equal, and conversely. An equal becomes a not-equal. Thus any logic conveyed through the ifTrue parameter gets built into the choice of conditional branch. The condJump function merely translates the semType into the corresponding branch instruction. Thus the semType LESS becomes the string jl, NEQ becomes jne. genCode then writes this instruction to the target file, along with a symbolic label derived from the integer lbl. Here are some examples that illustrate how all this works. These can be found in file pascal5\t15.pas:
; 23:   if r1 > r2 then writeln('OK 1') else writeln('NOT ok 1');
        fld     R2_052
        fld     R1_051
        call    fcompare
        jle     $L_30

Here, r1 and r2 are both type real, and require a little help from the runtime helper function fcompare. The two fld instructions push the real numbers into the FPU stack. fcompare performs a floating-point comparison, popping them from the stack, and setting the CPU flags. The FPU instruction fcompp in fact does most of the work; however, it sets flags in the FPU, not the CPU, so a few more instructions are required to set the CPU flags to correspond to the comparison. The jle instruction corresponds exactly to the inverse of a > comparison. That is, we want a true comparison to fall through.

The next example illustrates the power of our evalCompare function. Recall that any use of the NOT operator can be coded as branch instructions with no additional instructions. We simply arrange for the correct choice of conditional branches:
; 38:   if not (r1 > r2) then writeln('OK 14') else writeln('NOT ok 14');
        fld     R2_052
        fld     R1_051
        call    fcompare
        jg      $L_82

Here, the not operator is folded into the choice of branch instruction (jg instead of the expected jne), so the branch is taken exactly when r1 > r2 holds, and the fall-through case corresponds directly to the not of the > comparison. Several comparisons can be combined with AND, OR and NOT through branching instructions, yielding reasonably optimal code, for example,
; 60:   if (r1 > r2) or (c1 < c2) then writeln('OK 31') else ...
        fld     R2_052
        fld     R1_051
        call    fcompare        ; compare r1 and r2
        jg      $L_147          ; r1 > r2 test
        mov     AL,C2_054
        cmp     C1_053,AL       ; compare c1 and c2
        jge     $L_146          ; c1 < c2 test
$L_147:
        ; the true case
$L_146:
        ; the false case

As you can see, the comparisons and logic are folded neatly into an optimal code sequence.
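For completeness, here is what reverseSemt and condJump plausibly look like, given the behavior described above (LESS maps to jl, NEQ to jne, and each comparison reverses to its logical complement). The real tables are in eval.cpp; treat these bodies as a sketch.

    // Each comparison's logical complement: < reverses to >=, = to <>, etc.
    semType reverseSemt(semType st) {
        switch (st) {
        case LESS: return GEQ;    case GEQ: return LESS;
        case GTR:  return LEQ;    case LEQ: return GTR;
        case EQ:   return NEQ;    default:  return EQ;   // NEQ
        }
    }

    // The branch mnemonic that jumps exactly when the comparison holds.
    const char* condJump(semType st) {
        switch (st) {
        case LESS: return "jl";   case GEQ: return "jge";
        case GTR:  return "jg";   case LEQ: return "jle";
        case EQ:   return "je";   default:  return "jne"; // NEQ
        }
    }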

Double Logical Complement


An expression of the form
not not b

is of course equivalent to b. This double complementation can be detected during the bottom-up construction of the AST as an optimization. Function foldConst (in file Evconst.cpp) looks for such optimizations and reduces the AST correspondingly. This particular combination is detected and reduced in the following section of foldConst:
if (one()->getsemType() == NOTOP) {
    // double negative
    ret= (Ceval*) one()->one()->unlink();
    deleteOne();
    break;
}

Here, the current AST node is a NOTOP. We see if its child is also a NOTOP, in which case we can replace this node by the grandchild (one()->one()). The pattern here, as in other sections of foldConst, is to set ret to some AST node that we want the current node to be replaced by. (The default is this node). We unlink it so that the current one (and all its descendants) can be deleted without deleting the grandchild. This operation also reduces any number of not operators, effectively eliminating all pairs of such operators. A competent programmer should never write a double negation or logical complement. However, they might appear in source code generated by some higher-level tool, or through a macro expansion.

Optimizations Not Done


Several optimizations are not done in the pascal5 compiler, as follows:

- Comparisons of identical expressions should be reduced to a true or false constant. One might see if the children of a comparison node turn out to be equivalent. This of course requires some work, since the expressions will be form equivalent, not pointer equivalent. Also, if a function call is part of each expression, the compiler cannot be sure about side effects that would render the two seemingly equivalent expressions different at runtime.
- Some combinations of AND, OR and NOT could be algebraically reduced into simpler forms. This would require an AST analyzer that looks for various combinations of these operators, with a view to reducing the instruction count, of course.

Branching Code from Production Rules


We can now tackle the last phase of our control structure compilation: how to manage the production rule semantics such that optimal branching code will be generated. The overall philosophy used in pascal5 is to build an AST for each expression and assignment statement. An AST is not built for a sequence of statements or for any of the control statements. This decision means that we cannot easily develop an optimization strategy that deals with more than one statement, for example, one in which registers are optimally assigned to variables over a block of statements.

Compiling an if-then Statement


Under this philosophy, let's see just how a simple control statement must be handled. Here's the obvious production rule for the one-sided if-then statement:
Stmt : IF Expr THEN Stmt #IF_STMT

If we did nothing else, here's the situation when this production rule is called up in our bottom-up parser. Recall that when any production rule pops up, all of its nonterminals have already been scanned:

- The Expr nonterminal will have been converted into the root of an AST representing the Boolean conditional expression. No code will have been generated, since that requires calling eval. However, the expression tree will have been reduced to some extent, for example, by folding constants and eliminating useless operations.
- The Stmt will have been expanded into target code. Code will have been generated for this, since in general we generate statement code as it appears in bottom-up order, rather than saving it in an AST.

This is clearly unsatisfactory. We need to generate some code from the Expr part prior to the code generated through Stmt, so that a conditional branch instruction can decide whether to execute the Stmt. But that doesn't happen as the rule stands. Having the Expr code follow the Stmt code would normally be OK, except that we still need some way to branch around the Stmt code to the branching test on Expr.

Fortunately, it's easy to interrupt the parsing process with another production rule. We'll replace the Expr nonterminal with a new one (we'll call it IfExpr), like this:
Stmt : IF IfExpr THEN Stmt #IF_STMT

We'll then need to add the rule


IfExpr : Expr #IFBOOLEAN ;

This new rule will be triggered before any of the Stmt code is generated. It therefore provides us with an opportunity to generate the branching code we need and provide other services, as we'll now explain. Our new rule IFBOOLEAN is clearly the place in which to evaluate Expr. We know that it's evaluated as the Boolean expression of an if, so we can call evalControl on it, rather than eval. We've seen that evalControl requires a label. We also want to fall through to the Stmt on a true Boolean (at runtime), so the ifTrue parameter will be 1. The code for this production rule will therefore look somewhat like this:
short label= newLabel();
Expr->evalControl(label, true);

This will generate branching code that tests the Boolean Expr, falls through to the next code generated on true, and branches to the label on false. However, we have a new problem: we want the label to be directed to the code following the Stmt. How do we arrange for this label value (a short integer) to be transferred to the right place? Note that a Stmt form can be quite complicated, and can recursively include more if-thens and other control structures.

The only safe place to save this label value is on the parser push-down stack. We therefore need to construct something for the IfExpr nonterminal, and keep it there. Later on, after the Stmt material has been parsed, we'll find the IfExpr nonterminal and semantic information. That'll happen when the IF_STMT production rule is triggered. So we essentially need to copy label into an IfExpr structure. This will be kept on the parser stack, and reappear later in the IF_STMT production rule, where we need it.

We'll associate IfExpr with a Ceval object, and include a type short as one of that object's data members. (We could also have created a new class for this purpose, but extending Ceval seems easier.) A new constructor will also be introduced:
Ceval(semType semt, short label);

so that creating this special kind of Ceval object is easy. The compiler code for the IFBOOLEAN production now looks like this:
short label= newLabel();
IfExpr= new Ceval(LABEL, label);   // if FALSE goto label
                                   // if TRUE fall through
Expr->evalControl(label, true);

Let's now return to the IF_STMT production rule:


Stmt : IF IfExpr THEN Stmt #IF_STMT

We know that the target code generated by the time this rule is triggered will look like this:
; evaluate IfExpr as branching code
; branch to LABEL if FALSE, fall through if TRUE
; code for Stmt

We also know that the integer value of the LABEL is contained in the Ceval object IfExpr. We can fetch that value through the call
IfExpr->getLabel()

which is a function we added when the label value was attached to class Ceval. All we need to do is generate the following:
$L_nnn:

where nnn represents the integer found in IfExpr. Here's the compiler code for that:
short label= IfExpr->getLabel();
genLabel(label);

Example
Here's a simple example of a complete if-then statement as compiled through these production rules. The first line is the Pascal source statement, and the remaining lines are the generated target assembler:
; 120:  if (b0) then b1:=false;
        cmp     BYTE PTR B0_051,1   ; generated by IFBOOLEAN
        jne     $L_90               ; also generated by IFBOOLEAN
        mov     B1_052,0            ; the Stmt, b1:=false;
$L_90:                              ; label generated by IF_STMT

Compiling an if-then-else Statement


A naive production rule for an if-then-else statement is the following:
Stmt : IF Expr THEN Stmt ELSE Stmt #IFE_STMT

As with the one-sided if-then statement, this is unsatisfactory. The Expr part will become an unevaluated AST, and code for the two Stmt forms will be generated in sequence. We clearly need to interrupt the parsing process with some additional production rules. There are various ways of doing this, but we chose the following set of rules, which does the trick very nicely, as we'll explain:
Stmt   : IF IfExpr THEN Stmt Else Stmt   #IFE_STMT
IfExpr : Expr                            #IFBOOLEAN
Else   : ELSE                            #ELSE

We'll use the #IFBOOLEAN rule that we set up for the one-sided if-then. The #ELSE rule will allow us to insert an unconditional branch (to get around the second Stmt), and a label L2 marking the second statement. This label (L2) will be the one provided in the IfExpr object, so we need to access that from within the #ELSE rule. We'll also need a second label (call it L3) for the unconditional branch that skips the second Stmt. So here's the sequence of parsing operations:

- Token IF is scanned.
- An expression (Expr in rule #IFBOOLEAN) is scanned and converted into an AST. (No assembly is emitted yet.)
- Production rule #IFBOOLEAN is reduced. This permits us to evaluate the Boolean expression and set up a branching label. The label is placed in the parser stack as the IfExpr nonterminal.
- Token THEN is scanned.
- Statement Stmt is fully scanned. Lots of assembly instructions could be generated from this.
- Token ELSE is scanned.
- Production rule #ELSE is reduced. At this point, we can generate a jmp instruction to get around the second Stmt that we know is about to be coded. (Assembler for the first Stmt has already been generated.) We can use the label attached to the IfExpr object in the stack, and create another label to be attached to the Else in the stack.
- The second statement Stmt is fully scanned. Again, lots of assembly instructions could be generated.
- The production rule #IFE_STMT is reduced. Within it, the Else nonterminal carries a label that we need as a target that should be generated at this point. Everything else is complete, and any parser stack objects can be deleted.

The generated code will follow this pattern:
        branching code for the Boolean Expression    ; (#IFBOOLEAN)
        ; fall through on TRUE, branch to L2 on FALSE
        code for the first Stmt                      ; (Stmt.1)
        jmp     L3      ; to skip around the second Stmt (#ELSE)
L2:                     ; (#ELSE)
        code for the second Stmt                     ; (Stmt.2)
L3:                     ; (#IFE_STMT)

We've marked the portions generated by each of the production rules, i.e. #IFBOOLEAN, #ELSE, #IFE_STMT. To implement this, we use the IfExpr semantics as in the previous if-then coding. New code is required in the ELSE production rule to access the label carried in the IfExpr object. Doing this requires a somewhat radical approach to accessing objects in the parser stack: we need to figure out just where IfExpr is in the parsing stack when the ELSE production is triggered. So we digress and discuss this important issue next.


Accessing Deep Stack Objects


The general rule about objects in the LR parser stack is that they represent the roots of completed derivation trees. The parser stack in fact carries the front end of a viable prefix, and becomes the viable prefix on a reduce action. To see how this works, consider the Pascal statement
if (a<b) then a:=a+2 else b:=b+3;

Given the above production rules, the expression (a<b) is parsed first. It will appear in the parser stack as an AST rooted in the nonterminal Expr. The reserved word then will then be parsed to yield the nonterminal Then. The statement a:=a+2 is processed next, yielding the nonterminal Stmt. (No AST is built for statements, so only this nonterminal will be in the parser stack). Eventually, the reserved word else is picked up on the production rule
Else : ELSE #ELSE

[Figure: partial derivation tree for the statement if(a<b)then a:=a+2 else b:=b+3;. The completed portion covers IF, IfExpr (derived from Expr, covering (a<b)), THEN, the first Stmt (the AssignStmt a:=a+2), and the ELSE token of the current rule; the second Stmt (the AssignStmt b:=b+3) is not yet completed.]
Let's pause on this production rule and see what the derivation tree looks like, also the parser stack. The derivation tree is shown above. The portions IF, IfExpr, Then, Stmt and the reserved word ELSE are on the stack. The second Stmt portion (corresponding to b:=b+3) hasn't been parsed yet. ELSE will be on the stack top, Stmt beneath it, Then beneath Stmt, IfExpr beneath Then, and IF beneath IfExpr. Here's the parser stack at this moment during the parse:

Stack Position   Object in Stack
TOS-5            unknown
TOS-4            IF (token)
TOS-3            IfExpr (Ceval* pointer)
TOS-2            THEN (token)
TOS-1            Stmt (null pointer)
TOS              ELSE (token)

Notice that the token ELSE is at the top of the stack, as we expect. Beneath it are the preceding parts of the IFE_STMT production rule right parts. The whole IFE_STMT doesn't appear in the stack, since it hasn't been completely parsed yet. However, everything to the left of the ELSE has been parsed, and will be represented on the stack as either a token, a null pointer or a pointer to a tree-like Ceval object. In particular, we see that the IfExpr pointer should be at TOS-3. All this depends on certain conditions that we need to verify:
- The ELSE production rule is only used in the IFE_STMT production rule. If it appears in the right part of any other production rule, then we can't claim that this stack configuration will be correct.
- The stack integrity is maintained, even if some syntax errors have occurred during parsing. The error recovery system must be designed to maintain integrity.
- We can't determine what appears below TOS-4, in particular, what's at TOS-5, or even how many objects are in the stack. We can only infer the contents of TOS-4 through TOS.
The stack object Cstack that supports the parsing stack provides function stackRef(int n), which returns the pointer at the stack location TOS-n. This returns a CstackElement pointer, which must be cast to the derived class Ceval*. (We have sufficient confidence in our operations that this cast is safe, don't we?) A sketch of such a function appears below.
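Here is a minimal sketch of what such a stackRef accessor might look like, assuming the parser stack is kept in a simple array indexed by a top-of-stack cursor. This only illustrates the idea; the actual Cstack class is more elaborate:

class CstackElement { };               // base class of all parser stack entries

class Cstack {
  enum { STACKSIZE= 500 };             // capacity chosen arbitrarily here
  CstackElement *elems[STACKSIZE];     // the parser stack proper
  int tos;                             // index of the top-of-stack element
public:
  // return the element n positions below the stack top (n == 0 is TOS)
  CstackElement *stackRef(int n) const { return elems[tos - n]; }
};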

Using the Deep Stack Pointer


We can use this analysis to figure out where the IfExpr object is: from the figure and stack table above, it's at TOS-3 when the #ELSE production rule is triggered. So here's the code we need in the ELSE production rule:
Else : ELSE #ELSE
{
  // the first Stmt has just been coded
  // we're ready to launch the second Stmt
  short label= newLabel();               // where to go after the first Stmt
  Else= new Ceval(LABEL, label);
  string lbl;
  makeLabel(lbl, label);
  genCode("jmp", lbl);
  Ceval *IfBool= (Ceval*) stackRef(3);   // points to IfExpr
  assert(IfBool!=0);
  label= IfBool->getLabel();             // where to go if FALSE
  genLabel(label);
} ;

A label is declared as a short integer. This will be newly manufactured in the line
short label= newLabel();

We create a new Ceval object to hold this label, and stuff it back into the Else return value slot. This will be used later in the complete #IFE_STMT production rule:
Else= new Ceval(LABEL, label);

Function makeLabel converts a label integer into a string. Then the call
genCode("jmp", lbl);

generates an unconditional jump to this new label. This will skip around the second (ELSE) statement. The line
Ceval *IfBool= (Ceval*) stackRef(3);

fetches the previous label, created in the IfExpr production rule. We've explained how the stackRef(3) works above. Although we're pretty sure that this will work correctly, the assert verifies


that this pointer is (at least) not zero:


assert(IfBool!=0);

To be really sure, we should (but didn't) also test that its semType is LABEL, like this:
assert(IfBool->getsemType() == LABEL);

We need to create a label marker with this, and function genLabel does just that. Let's return to the covering #IFE_STMT production rule and see what needs to be done there:
Stmt : IF IfExpr Then Stmt Else Stmt #IFE_STMT

When this is triggered, we will have carried out the branching evaluation of Expr, planted labels and an unconditional branch. All the code for the two Stmt forms has also been generated. All that's left is to plant a target label marker. This will be the target of the unconditional branch following the first Stmt, causing control to skip around the second Stmt. Here's the required code for this production rule:
Stmt : IF IfExpr Then Stmt Else Stmt #IFE_STMT
{
  // generate label following Stmt.2
  short label= Else->getLabel();
  genLabel(label);    // target following Stmt
}

Notice how this fetches the label value from the Else nonterminal and generates a target label from it. It's clear that the set of production rules involved in this control structure are interdependent and must be carefully designed. We also need to be correct about the stack location of the IfExpr object when the ELSE production rule is triggered. The 3 used in stackRef(3) depends on the structure of certain production rules, and there's no symbolic way to relate this number to the parent rule.

Example
The following example Pascal statement can be found near the end of file pascal5\t17.pas. We've added some comments to indicate the source of each of the assembler statements:
; 121:  if (b0) then b1:=false else b1:=true;
        cmp  BYTE PTR B0_051,1   ; from IFBOOLEAN
        jne  $L_91               ; from IFBOOLEAN
        mov  B1_052,0            ; evaluation of first Stmt
        jmp  $L_92               ; from the ELSE production rule
$L_91:                           ; from the ELSE production rule
        mov  B1_052,1            ; evaluation of the second Stmt
$L_92:                           ; from IFE_STMT

Compiling the while-do Statement


The while-do statement is very similar to the one-sided if-then. The pattern required is this:
; WHILE Expr DO Stmt
L1: branch-evaluate Expr. fall through on TRUE, branch to L2 on FALSE
    evaluate Stmt
    jmp L1
L2:

As with the if-then statements, the naive production rule


Stmt : WHILE Expr DO Stmt #WHILE_STMT ;

is unacceptable. We can use the IfExpr production rule semantics to replace the Expr, but we also need to replace WHILE with a production rule so that we can generate the first target label (L1: in the pattern). Here are the relevant production rules. If you have followed the previous discussions, their intent should be apparent. These can be found in pascal5\pascal.grm:
Stmt : While IfExpr DO Stmt #WHILE_STMT
{
  // generate JMP back to the IfExpr
  int label= While->getLabel();
  string lbl;
  makeLabel(lbl, label);
  genCode("jmp", lbl);
  label= IfExpr->getLabel();
  genLabel(label);    // where to go if false: just past Stmt
} ;

While : WHILE #WHILE
{
  // the Expr will come next
  int label= newLabel();
  While= new Ceval(LABEL, label);
  genLabel(label);
} ;

Compiling the repeat-until Statement


The pattern of target code for a repeat-until statement is as follows:
; REPEAT Stmt UNTIL Expr
L1: evaluate Stmt
    branch-evaluate Expr. fall through on FALSE, branch to L1 on TRUE

The production rule semantics are similar to the control statements done previously. The keyword REPEAT must be replaced by a nonterminal and a production rule so that we can generate the label L1. This can be kept in the parsing stack and referred to later in the production rules.
Stmt : Repeat StmtList OptSemi UNTIL Expr #REPEAT_STMT
{
  int label= Repeat->getLabel();
  Expr->evalControl(label, true);
} ;

Repeat : REPEAT #REPEAT
{
  // a StmtList will come next
  int label= newLabel();
  Repeat= new Ceval(LABEL, label);
  genLabel(label);
} ;

Here, the #REPEAT production rule is reduced first. This creates a label, which will be carried in the Ceval object returned to the Repeat nonterminal. Function genLabel sends the initial label
L1:

to the assembler output. The StmtList clause is next generated, producing some sequence of assembly code. The Expr nonterminal will not result in code until we specifically call for it, which is done by the evalControl call. This function wants the label number and an ifTrue parameter. By choosing true for ifTrue, we set up branching code that will return control to the label if the Expr evaluates false, and falls through if true. This is exactly what we need for this control structure.
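The heart of an evalControl-style function is choosing a conditional jump whose sense matches the requested branching polarity. The sketch below isolates just that choice; the operator tags, the function name pickJump and the use of cout are our stand-ins, not the actual Ceval interface:

#include <iostream>
using namespace std;

enum cmpOp { EQL, NEQ, LSS, GTR };    // stand-ins for AST comparison tags

// Choose the 80x86 conditional jump that transfers to the label when
// the comparison result equals branchWhen; the other sense falls through.
const char *pickJump(cmpOp op, bool branchWhen) {
  switch (op) {
  case EQL: return branchWhen ? "je"  : "jne";
  case NEQ: return branchWhen ? "jne" : "je";
  case LSS: return branchWhen ? "jl"  : "jge";
  case GTR: return branchWhen ? "jg"  : "jle";
  }
  return "jmp";                       // not reached
}

int main(void) {
  // repeat-until: branch back to L1 while the until-expression is FALSE
  cout << pickJump(EQL, false) << " $L_1" << endl;   // prints: jne $L_1
  return 0;
}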

Compiling the for-do Statement


The for statement is fairly complicated to encode efficiently. We ask that you review the patterns described earlier. We first separate the TO case from the DOWNTO case, as separate production rules. This turned out to be easier than trying to distinguish these two cases within one production rule. Here's the Stmt production rule for the FOR-TO case. As before, we've replaced the keyword TO with a nonterminal ForTo and a production rule, which allows us to insert some special code within the flow of this statement. You should realize that the AssignStmt nonterminal is attached to a complete AST representing this statement. Although it's evaluated, it isn't changed when this #FOR_STMT is reduced. We can therefore examine it to fetch the assignment statement variable for later use. The pointer varp in fact will point to a Ceval object describing this variable; it has to be an identifier.
Stmt : FOR AssignStmt ForTo ForUpExpr DO Stmt #FOR_STMT
{
  /* AssignStmt is done once; its code comes out automatically.
     At end of Stmt, we return to test the AssignStmt variable
     against the Expr value.
     ForTo carries the label for the test.
     ForUpExpr carries the label for exiting.
     Here we need to generate an increment, the jump back and a
     label to get past Stmt, then a final decrement to restore
     the variable */
  Ceval *varp= new Ceval(*AssignStmt->one());
  Ceval *incrp= new Ceval(INCR, varp, CNULL);
  incrp->checkTypes();
  incrp->eval(tINT);    // generate an increment of the variable
  // unlink and dispose of this tree
  incrp->one()->unlink();
  delete incrp;

We need to explain what this incrp business is about. We need to generate an increment instruction of the FOR variable just after the statement list, but we don't have an abstract syntax tree (AST) that describes that incrementation. It happens that the FOR variable could be various types, and sorting out the different types is rather painful. But eval knows how to do this. So we construct a new Ceval AST whose root is an INCR, through the line
Ceval *incrp= new Ceval(INCR, varp, CNULL);

We need to call checkTypes on the new AST, then eval. Once the evaluation is out of the way, we can delete this AST, but not until we unlink the variable varp from it. Of course, eval needs to know how to deal with an INCR, but this is easily added to its switch statement, as the sketch below suggests.
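Here is a small self-contained sketch of that idea: an INCR (and DECR) case in an eval-style switch that emits an inc or dec instruction for the node's child variable. The reduced Node structure and the use of cout as the assembler stream are illustrative assumptions; the real Ceval::eval must also dispatch on the variable's type and addressing mode:

#include <iostream>
#include <string>
using namespace std;

enum semType { IDENT, INCR, DECR };

struct Node {                  // drastically reduced stand-in for Ceval
  semType tag;
  string  name;                // variable name, meaningful for IDENT nodes
  Node   *child;               // first child: the FOR variable for INCR/DECR
};

void eval(Node *n) {
  switch (n->tag) {
  case INCR:
    cout << "\tinc\t" << n->child->name << endl;
    break;
  case DECR:
    cout << "\tdec\t" << n->child->name << endl;
    break;
  default:
    break;                     // the real eval handles many more tags
  }
}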

The newtags Grammar Statement


The label INCR does not appear on any production rule. We need to get the Qparser software to generate one. That can be done in the grammar through the special clause
newtags=INCR DECR ...

which you can find near the top of the grammar file pascal5/pascal.grm.

Compiling the FOR Statement


We are now ready to generate a branch instruction back to the FOR testing instructions, to a label carried in the ForTo nonterminal:
int label= ForTo->getLabel();
string lbl;
makeLabel(lbl, label);
genCode("jmp", lbl);    // return to test again

Here's the label for the exit line, following the FOR statement:
genLabel(ForUpExpr->getLabel()); // label the exit line

Pascal requires that the FOR variable be decremented, so that it will carry the last value of the statement. We generate a decrementing AST to do this as we did for incrementation, then delete it. At the end, we can delete varp as well.
  Ceval *decrp= new Ceval(DECR, varp, CNULL);
  decrp->checkTypes();
  decrp->eval(tINT);
  decrp->one()->unlink();
  delete decrp;
  delete varp;    // varp
};

ForTo : TO #FORTO
{
  short label= newLabel();
  ForTo= new Ceval(LABEL, label);
  genLabel(label);
} ;

The #FORUBOOLEAN production rule is what manages the FOR loop comparison testing. It has a back reference to the AssignStmt in order to fetch the variable. We then create a Ceval AST that describes the comparison we want, rather than try to duplicate all the messy code involved in this operation. The various unlinkings and deletions are designed to avoid memory leaks from these operations.
ForUpExpr : Expr #FORUBOOLEAN
{
  Ceval *assignp= (Ceval *) stackRef(2);    // the AssignStmt
  Ceval *varp= new Ceval(*assignp->one());  // var := Expr
  varp->unlinkSibling();
  Ceval *boolv= new Ceval(*Expr);
  Ceval *comp= new Ceval(GTR, varp, boolv, CNULL);
  comp->checkTypes();
  short label= currentParser->newLabel();   // exit label
  comp->evalControl(label, false);
  ForUpExpr= new Ceval(LABEL, label);
  comp->two()->unlink();
  comp->one()->unlink();
  delete varp;
  delete boolv;
  delete comp;
} ;

The quality of the code resulting from these production rules and optimization techniques can be judged from some sample Pascal programs that contain some FOR statements of various kinds. An example is given earlier in this chapter.

Compiling the case Statement


The checking and code generation operations for the CASE statement are quite complicated. We've therefore chosen to embed them in a helper function genCaseStatement, which receives pointers to various structures built up in other production rules:
Stmt : CASE CaseExpr CaseOf CaseClauses OptSemi Otherwise #CASE_STMT
{
  genCaseStatement(CaseExpr->getLabel(),
                   CaseOf->getLabel(),
                   CaseClauses, Otherwise);
} ;

Of course, this production rule is triggered only after all of its components have been parsed and processed. So the genCaseStatement function is not responsible for evaluating the case expression (CaseExpr), or catching the labels of each of the statements. It's responsible for performing the case branching. Here's the overall plan:
1. The case expression is evaluated as an integer, left in register AX. This is done in the CaseExpr production rule.
2. A new branch label Lresolve is created. An unconditional jump to Lresolve is generated. At runtime, this will bypass all the statements. This label will later be resolved to code that will work out the case statement branch.
3. A label Lexit is created. This will mark the exit position of the case statement.
4. Each case-clause statement is assigned its own unique label LCn. Also, the source code labels are captured in a list and associated with the label LCn. These are carried in a linked list, and will be used by the branch code generator later.
5. The label LCn is written to assembler to form a symbolic label name for the following case statement.
6. Each case-clause statement is parsed and assembler code is generated for it.
7. After each case-clause statement, an unconditional jump to label Lexit is generated.
8. Upon reducing the CASE_STMT production rule, label Lexit can be resolved by writing Lexit: to assembler.
9. genCaseStatement can now be called, which will generate assembly code that branches to the appropriate case statement, or to the terminator label Lresolve.
10. The terminator label Lresolve is written to assembler as Lresolve:. This marks the following instructions.
Operations 4, 5, 6 and 7 are repeated for each of the case-clause statements.

Case-clause Statement
A case-clause statement is one of the Stmt forms controlled in the case ... end form. One such statement is defined by the following production rule:
CaseClause : ConstList CaseColon Stmt #CCLAUSE
{
  CaseClause= new Ceval(CCLAUSE, ConstList, CaseColon, CNULL);
  // Stmt is followed by a jump past the case stuff
  // We fetch the exit label object
  Ceval *exitLabelp= (Ceval *) stackRef(3);
  assert(exitLabelp);
  assert(exitLabelp->getsemType() == LABEL);
  short exitLabel= exitLabelp->getLabel();
  string lbl;
  makeLabel(lbl, exitLabel);
  genCode("jmp", lbl);
};

The ConstList part carries a list of constant numbers comprising the labels for this statement. We use the tree features of the Ceval objects to carry these as a linked list. CaseColon expands to a single colon character. We introduced a new production rule rather than just writing the colon into #CCLAUSE so that we could create a label, generate it as label:, and return the label value through the CaseColon expression:
CaseColon : ":" #CASECOLON
{
  // A CASE statement will follow this
  // We create a label and mark this location with it
  short label= newLabel();
  CaseColon= new Ceval(LABEL, label);
  genLabel(label);
};

Notice that the returned object CaseClause is an AST containing (as children) the constant list and a label attached to this particular statement. This object will in turn become part of a linked list of CaseClause objects comprising the list of case-clause statements. When the #CCLAUSE production rule is reduced, the associated case-clause statement has already been generated as assembler. However, we've attached a unique label to its front end, and are prepared to pass that label back up to the covering production rule #CASE_STMT. We need to do one more thing: acquire the exit label Lexit from the parsing environment, then generate the assembly code
jmp Lexit

To do this, we need to figure out where the exit label is located on the parsing stack. We are in the #CCLAUSE production rule, which has three right-most members. This rule is the CaseClauses nonterminal in the #CASE_STMT production rule. Now we have already reduced the CaseOf nonterminal in the #CASE_STMT, because it's to the left of the CaseClauses nonterminal. We have also placed the label we want in the CaseOf semantics, so it's just a matter of fetching it. It's at stack location 3 from the top, i.e. just below CaseClauses. Here's the code that finds that label and turns it into an assembler jmp instruction:
Ceval *exitLabelp= (Ceval *) stackRef(3);
assert(exitLabelp);
assert(exitLabelp->getsemType() == LABEL);
short exitLabel= exitLabelp->getLabel();
string lbl;
makeLabel(lbl, exitLabel);
genCode("jmp", lbl);

We've put in two asserts to make sure that we've found the correct material through the stackRef(3). A few tests have verified that this works correctly, and our theory seems correct, so we have plenty of confidence in this source code. The rule #CCLAUSE takes care of steps 4 through 7 in our framework.

genCaseStatement
We are now ready to tackle genCaseStatement. This can be found in pascal5/evpars.cpp. It's about 140 lines long, much too long for us to discuss in detail. (But you ought to look at the source code to follow this discussion). The most interesting parameter in this function is the pointer clauses. This carries a linked list of all the case clauses. Each case clause has a linked list of its constant labels and its clause label. For example, suppose that our case statement looks like this:
case E of
  5, 6: S1;
  7, 8: S2;
end;

Also, assume that the label for S1 is L_022, and the label for S2 is L_023. Then the clauses tree structure will look like the diagram below.

[Figure: the clauses tree structure. Clauses points to a "statement S1" node, whose sibling pointer (toSibling()) leads to a "statement S2" node. Each statement node's child (toChild()) carries its case statement assembler label (22 for S1, 23 for S2), and that node's child begins the linked list of case clause labels (5, 6 under S1; 7, 8 under S2).]

Each of the boxes in the diagram is a Ceval object, which is derived from Ctree. Each Ctree object carries two pointers, one to a sibling, and another to a child. These pointers are shown in the diagram as a down-arrow (child) or a right-arrow (sibling). Thus Clauses points to an object representing the case statement S1, which points (through its sibling pointer) to statement S2. Each statement object has a child, which carries the case statement assembler label (the 22, 23 boxes). (The label value is carried in the label data member of the Ceval object). The assembler label is the label that will appear in the target assembler code. This label is chosen by the compiler. Only one per statement is chosen. The child of this child is the first member of a linked list of Pascal or case clause labels (the 5, 6, 7, 8 boxes). These labels are also carried in the label data member of the Ceval object, while the pointers are carried in the Ctree base class. The case clause labels are the labels appearing in the Pascal source code.
This data structure is built directly through the production rule reduce actions. The reader is invited to examine them to see just how this structure is put together. For example, here's how the list of Pascal label constants is built from individual constant labels. A ConstList is either an IntConst or a ConstList "," IntConst. Each IntConst is just a Ceval object that carries a label value. The latter recursive rule shows how a new IntConst object is appended to the list of an existing ConstList:
ConstList : IntConst
          ;
ConstList : ConstList "," IntConst #CONSTLIST
{
  ConstList= ConstList.2;
  ConstList->appendSibling(IntConst);
  defeatGConce();
};

genCaseStatement Operations
Given this clauses data structure, the genCaseStatement function can devise a strategy for assembling the case branching instructions and table. Here's the grand strategy. The reader may examine the code (found in pascal5/evpars.cpp) for details:
1. Count the statements and find the minimum label and the maximum label. This requires working through the statements linked list, also the lower linked list, looking at each of the Pascal label values.
2. Using the minimum and maximum label values, we allocate an array of char (labels) sufficient to hold the entire range. It's initialized to 0. We will use this to check for duplicate Pascal labels.
3. We walk through the statements linked list again. For each label L, we see if labels[L-min] is 1. If so, we've seen this label before, and can complain about a duplicate label value. Having checked it, we set labels[L-min] to 1.
4. labels can now be deleted.
5. A general strategy is decided upon. This is based on the relative size of the label range and the count of labels. If the range is quite large compared to the count, then we encode the statement as a test-and-branch sequence. Otherwise, we encode it as a branch table. (More sophisticated or mixed strategies might also be deployed).
6. If a test-and-branch strategy was chosen in step 5, we walk through the statements linked list yet again, this time generating instructions of the form
cmp  eax,N
je   $L_M

Here, N is a Pascal label and $L_M is the assembler label assigned to this statement. At the end of this sequence, if all else fails, a branch to the otherwise statement is in order.
7. If a branch table strategy was chosen in step 5, a sequence of assembler code is generated that will access a branch table (to follow). This code will first test EAX against the minimum and maximum labels; if outside that range, a branch to the otherwise label will occur. If inside that range, then the minimum label is subtracted from EAX. EAX is multiplied by 4, making it a double word index. The base address of the branch table (another label) is added to EAX, making it a data-segment relative address. (The branch table will be in the data segment). It's pushed on the runtime stack, and a ret instruction executed. Here's a typical code sequence as generated:
; (value of E is in EAX)
cmp  EAX,12       ; the largest label
jg   $L_012       ; address of the otherwise statement S4--too large
cmp  EAX,1        ; the smallest label
jl   $L_012       ; go to otherwise if too small
sub  EAX,1        ; form a 0-based index
shl  EAX,2        ; 4*AX to form a byte offset
lea  EBX,$L_092   ; address of the branch table
add  EBX,EAX      ; address of the desired statement address
jmp  [EBX]        ; transfer control to the desired statement

8. (Follow-on from step 7) The branch table is generated. This requires yet another pass through the linked list clauses structure. However, we first need to allocate another array, this time of integers, called slabels, large enough to hold the range of labels. This array will be initialized to the otherwise label value, by default.
9. (Follow-on from step 8) Another pass through the linked list structure will pick up the Pascal labels and use them as an index into the slabels array, setting that array element to the Pascal label value. When we're done, the slabels array is effectively our branch table.
10. (Follow-on from step 9) The slabels array is used to generate an assembler list of addresses. These have to be in DATA space, so we need a .DATA directive to start them off. Each entry is a 32-bit double word offset. We can use the symbolic statement labels, but each label needs the attribute OFFSET so that the entry will be correctly set to the appropriate offset. Recall that these values will be pushed in the runtime stack, then referenced through a ret instruction, yet the target must be in the code segment. The assembler knows that each label refers to a code segment location, and the OFFSET directive ensures that it will generate a code-segment offset. Here's what the first few entries of a typical table look like:
        .DATA
$L_092:                     ; start of the branch table; has to be in DATA space
        dd  OFFSET $L_008   ; label 1, to S1
        dd  OFFSET $L_008
        dd  OFFSET $L_008
        dd  OFFSET $L_012   ; label 4, to S4, the otherwise case
        dd  OFFSET $L_009   ; label 5, to S2
        dd  OFFSET $L_012

Since we've made sure that the table is never accessed with an out-of-range index, we only need to provide values for the in-range cases. The defaults (no explicit label) are directed to the otherwise case label, which may turn out to be the label of the statement following the case.
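To make steps 1 through 5 concrete, here is a compact sketch of the duplicate-label check and the strategy decision, written over a flat array of Pascal labels rather than the clauses tree. The array form, the function name and the density threshold of 3 are all illustrative assumptions:

#include <iostream>
using namespace std;

// Reject duplicate case labels and decide between a branch table and
// test-and-branch code. Returns true when the table strategy is chosen.
// Assumes count >= 1.
bool chooseBranchTable(const long *label, int count) {
  long minL= label[0], maxL= label[0];
  for (int i= 1; i < count; i++) {        // step 1: find min and max
    if (label[i] < minL) minL= label[i];
    if (label[i] > maxL) maxL= label[i];
  }
  long range= maxL - minL + 1;
  char *seen= new char[range]();          // step 2: zero-filled range map
  for (int i= 0; i < count; i++) {        // step 3: duplicate check
    if (seen[label[i] - minL])
      cerr << "duplicate case label " << label[i] << endl;
    seen[label[i] - minL]= 1;
  }
  delete [] seen;                         // step 4: done with the map
  return range <= 3L * count;             // step 5: density heuristic
}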

Summary
Control structures are a high-level language feature that guide the flow of control from one section of an algorithm to another. Control structures typically take the form of goto, if-then-else, while-do, repeat-until, for and case structures. These appear in the C, Pascal, Ada and Java languages, among others. Compiling control structures to 80x86 assembler usually calls for an expression evaluation that sets certain CPU flags, such as the sign and zero flags, then using these with a conditional jump instruction.


Evaluating Boolean expressions can usually be done in two fundamentally different ways: through the use of logical operators or structured branches. The choice should be made with regard to the language constraints and finding the most efficient code sequence. A special compiler function called evalControl can be designed to facilitate near-optimal generation of control assembler.
Since we carry expressions in an AST, but not statements, the production rules for control structures usually need to be augmented in order to generate assembler labels and branches at the appropriate places. We show how to locate values carried by a nonterminal deeper in the parser stack than the current production rule shows. This trick is needed for certain of the control statements.
Most of the control structures can be compiled using evalControl with relatively simple code fragments attached to the production rules. Exceptions are the for and case statements, which require more work. The case statement in particular requires attention to the density properties of the set of case labels, leading to a compiler-level choice of different strategies. In general it can be implemented in one of three different ways, or some mix of these three: as test-and-branch instructions, through a branch table, or through a binary tree test-and-branch scheme.

References
[1] J. McCarthy, Recursive Functions of Symbolic Expressions and Their Computation by Machine, CACM 4(4) 184/195, 1960.
[2] J. McCarthy et al, LISP 1.5 Programmer's Manual, MIT Press, 1962.
[3] Guy L. Steele, Jr., and Gerald J. Sussman, LAMBDA, The Ultimate Imperative, MIT AI memo No. 353, 1976.
[4] Guy L. Steele, Jr., LAMBDA, The Ultimate Declarative, MIT AI memo No. 379, 1976.


Chapter 15: Block Optimization


W. A. Barrett, San Jose State University nch15.doc ...to be written...


Appendix 1: A C++ Primer


W. A. Barrett, San Jose State University napp1.doc, vs. 2.0

Historical Note
C++ was invented by Bjarne Stroustrup of the AT&T company, who wrote its early definition and produced its first implementation in 1985 [1]. It has had predecessors as early as 1980, in the form of "C with classes". Stroustrup has continued to be active in the development of language features and in its standardization efforts.
C++ owes much to C, which was introduced by Kernighan and Ritchie in 1978. It retains C as a subset. This was no accident. C continues to be a very popular programming language. It provides both low-level and high-level features and is sufficiently powerful to express operating systems, compilers, compiler libraries, real-time operating system libraries, as well as commercial and business software. Many of these applications required the use of assembler prior to the development of C. C's greatest success lies in the development of the Unix operating system and the ease with which Unix has been ported to a wide variety of hardware platforms. Only a relatively small set of driver functions need to be written for a particular hardware platform in order to adapt Unix to that platform. The success of Linux is at least in part a tribute to the portability of Unix and its dependence on C.
Stroustrup was able to write a C++ translator in C, which took C++ source code and generated C code. This was called the "cfront" compiler, and was used for several years during the development phase. It too was very portable, although debugging C++ code with it was difficult. The most portable implementation available now is from the Free Software Foundation as part of the Gnu software development package. The Gnu C++ compiler and its companion debuggers (gdb, mxgdb) have been ported to a large number of Unix platforms, including Linux.
Retaining all C features in C++ was an important objective in the design of C++. That made it possible to retain a vast amount of software written in C, adapting portions of it to the new class concepts found in C++. A secondary objective was arriving at a language that permitted a reasonable compiler to generate high-quality code. Unfortunately, these objectives are not compatible with safety: there are many features inherited from the C world that undermine code safety.
The name "C++" was coined by Rick Mascitti in the summer of 1983. It signifies the evolutionary nature of the language, i.e. the C language with significant additions.
C++ is a complicated language, even when one disregards the huge runtime libraries and templates that have subsequently been developed by various vendors. Its complexity stems partially from Stroustrup's determination to retain all C features, but also from his determination to retain as much of the high runtime performance that programmers have come to expect from C. There are many other languages that support object-oriented programming, but nearly all fail to meet the high performance standards of C. Smalltalk and Lisp are examples of object-oriented languages that are supported by a runtime interpretive kernel. Compiling either of these to high-performance native code can be done, but only by sacrificing many of the high-level features that make these languages attractive.
C++ continues to grow, albeit in small ways. There are several troublesome areas that the standards groups are wrestling with, but these are not likely to affect the casual user.

There are now many books on C++, ranging from tutorials to language reference manuals [2], [3], [4], [5]. Stroustrup's original book The C++ Programming Language [1] is in its third edition, and has over 900 pages. The reader is referred to them for more details. Our intention in this Primer is to present some of the central features of C++. We assume that the reader is familiar with C, so that we can focus almost entirely on the new class features of C++.

Class
A class is very similar to a C struct. It describes how an object is structured and used. In particular, a class has all of these attributes, many of which cannot be found in ANSI C:
- carries data members, i.e. ints, floats, structs, unions, pointers, and other class instantiations
- carries member functions, which are similar in style to C functions
- supports three protection levels for data and functions
- supports a special constructor and destructor function form
- supports overloading of function names
- supports inheritance
- supports virtual functions
A class declaration looks very much like a C struct. Here's the pattern:
class classname {
  data members
  member functions
};

The terminating semicolon is quite important. Leave it out, and the compiler will generate lots of error messages.
Class declarations are normally placed in a header file, with the suffix .h. Any function declarations that are not made in the class definition are normally put in a code file, with the suffix .cpp. This is merely a convention. You can also write everything in a single .cpp file if you want to. These particular suffixes are important in using any C++ compiler or make, since these tools use the file suffix as a way of inferring what's in the file.
Let's look at a specific example, a class that carries a complex number. Recall that a complex number has a real part and an imaginary part. Mathematicians write it like this: x + i y. The symbol i is the square root of -1. x is the real part, and y is the imaginary part. All four arithmetic operations (addition, subtraction, multiplication and division) are mathematically defined on complex numbers, as well as the transcendental functions, powers and roots. Each of these also yields a complex number. We'll illustrate one operation, addition, in the following example. We are using this only to demonstrate some properties of classes, not complex numbers.

Example
class complex {
private:
  double re, im;
public:
  complex(double r, double i) { re=r; im= i; };  // a constructor
  complex(double r)             // float to complex conversion
    : re(r), im(0) {}
  complex(void) : re(0), im(0) {}
  virtual ~complex(void) {}     // the destructor
  void add(complex &x, complex &y);
  void set(double r, double i) { re= r; im= i; };
  void print(void);
};

We'll describe all the features in the above class in due time. For now, note that:
- the data members are declared in the double re, im; line.
- the member functions are declared in prototype fashion after the keyword public.
- the three member functions complex are called constructors, and have a special role to play in a class.
- the member function ~complex (consider the tilde ~ as part of the name) is called a destructor, and has a special role to play in the class.
- the class name appears after the keyword class.
- the data members are written into the class as though they were members of a C struct. These are re and im. Both are carried as double-precision floating-point numbers. Any legal C types, including other objects or pointers to objects, can be included as data.
- the constructor and destructor functions have the same name as the class, i.e. complex.
- the destructor is distinguished by the tilde character ( ~ ). (This suggests the "complement" of construction).
- there's more than one constructor. This illustrates name overloading.
- other member functions (add, print) are like prototype declarations of C functions, except that they lie within the pair of {} braces for the class. We say that these functions are bound to this class.
- all three constructors are declared inline. The thing to look for is a pair of {} braces following the function name and parameters. (These may have nothing in them, which means that no code needs to be executed). If the function name and parameters are followed by a semicolon ( ; ) instead, the function code must be defined elsewhere, usually in a companion .cpp file. We'll see how that's done shortly.
- the final semicolon after the class {} must be present.
- several features will be explained later, i.e. the "~" in the destructor, keyword virtual, the ":" syntax in the constructors, and the use of "&" in the add function prototype. The keywords private, public will also be discussed later.

Object
An object is an instantiated class. Instantiating a class essentially means that some memory space is allocated to the data members of the class. In that respect, an instantiated class is similar to allocating space for a C struct.
Only the data space is allocated when an object is instantiated. The function code will always be generated by the C++ compiler, and linked, whether any objects are instantiated or not. Only one copy of the function code is required. (Code can't be changed at runtime, so there's no point in having more than one copy of it in memory). You can instantiate as many objects from a class definition as you like, or none.
The member functions are designed to operate on the object's data members, and are therefore bound to the object's class. You must have an instantiated object to call any of its member functions.
You can think of a class as a cookie cutter, and its instantiated objects as the cookies. A cookie cutter has a particular shape, which is analogous to the class data and function members. You only need one cutter of any particular shape to cut any number of cookies from a sheet of dough. Similarly, you can instantiate any number of objects from a single class definition.
There are three ways of instantiating an object: at the global scope level, at a local scope level, or from the heap. Global objects, like C globals, can be accessed by any function. Local objects are bound to the function in which they are declared, and can't be accessed outside that function. The heap provides global storage for a large number of objects, but the separate objects allocated from the heap are bound to pointers which may have a global or local scope level. The following example illustrates all three of these allocations in a single program. It also illustrates the use of the constructor functions.

Example
Here is a complete C++ program that can be compiled and executed. We've added comments in Italics to explain the components. If you write this into a file, don't include the Italic comments, only the bold-face Courier.
#include <iostream>
using namespace std;   // needed to support cout

This is the class definition given above. It's normally in a .h file:


class complex {
private:

These are the data members of the object:


  double re, im;
public:

Three constructors are declared. One of these is always called when an object is created:
  complex(double r, double i) { re=r; im= i; };
  complex(double r)            // float to complex conversion
    : re(r), im(0) {}
  complex(void) : re(0), im(0) {}

Here's a destructor for the object. This is called just before releasing the object's space. (It does nothing)
virtual ~complex(void) {}

Here are three member functions. Functions add and print must be declared somewhere else.
  void add(complex &x, complex &y);
  void set(double r, double i) { re= r; im= i; };
  void print(void);
};

Here are definitions of the two functions that aren't declared inline in the class definition. These are normally in a .cpp file. Note the complex:: part in front. This states that the function is bound to the class complex.
void complex::add(complex &x, complex &y) {
  re= x.re + y.re;
  im= x.im + y.im;
}

void complex::print(void) {

Function print prints the data members of this object. cout is a printing function that takes one or more parameters through the operator <<. Here, it prints a left parenthesis, then parameter re, a comma, parameter im, a right parenthesis, and finally a line ending (endl).
  cout << '(' << re << ',' << im << ')' << endl;
}


We declare two global objects. c4 is a global object, and pc is a pointer to a type complex object. The class declaration for complex must precede these in this file. That's why all header files (*.h) are included near the top of a cpp file.
complex c4(5, 7);
complex *pc;

Here's a global function. It has no "name::" component, so it isn't bound to any object. We use it to illustrate how an object can be allocated from the heap, and that the allocated object survives returning from the function.
void fcn(void) {
  pc= new complex(1,2);
}

The function main is the first one called in a program suite, just as in C. However, the global objects c4 and pc are allocated before any of this code is executed.
int main(void) {

This shows that object c4 is set up before main is called:


c4.print(); // will show (5,7)

Create three local objects. Notice that declarations and statements can be intermixed in C++:
complex c1(22.0, 13.5);
complex c2(15.0);
complex c3;

Perform an addition with the add function:


c3.add(c1, c2);

Show the results of the addition:


c3.print();   // will show (37.0,13.5)

Call the global function fcn. This creates a new object bound to pointer pc, and initializes it:

fcn();

Show the data in the object:

pc->print();  // will show (1,2)

Delete the object pc:


delete pc;
return 0;
}   // (end of the program)

Compiling, Linking, Executing and/or Debugging this Program


If you'd like to compile, link and execute this short program, follow these instructions. Use a technical editor to write the program code into a file. Be sure to copy only the boldface Courier material, not the Italic comments. Call your file scomp.cpp. What you do next depends on your platform.
Visual C++: Open a new project. Choose the "Win32 Console application". Select a directory and a project name. After this is opened, choose File/Open. Find or write your source file scomp.cpp. Choose Build/Build scomp.exe. This should compile and link your program. When finished, execute it through Build/Execute, or use one of the menu buttons. The executable file scomp.exe can be found in the Debug or Release subdirectory, depending on which mode VCC is set to. You should compile under the Debug option; set this under the Build/Set Active Configurations menu. You should see a Release line and a Debug line. Select the Debug line, then the OK button. Rebuild the file if necessary.
You can execute this program from within the Visual C++ vs. 5 framework, provided that the build was successful. Find the icon that looks like this:

and click on it to run the program. A DOS window will pop up, display a few lines, then disappear. To hold this window until you've viewed it, you can place a breakpoint on the last line of the main function. In the left panel (Class View/File View) select File View, then find the scomp.cpp file and open it as an editor window. Choose the last line (a return) and set the cursor somewhere in that line. Now look for the hand icon, and click on it.

You should see a red dot appear next to the selected line in the editor. Now when you execute the program, it will run to the position of the red dot, then stop just before executing that line. A yellow arrow should appear over the red dot. The DOS window will vanish, but you can bring it back by looking for it along the bottom line; it should be labelled scomp and have an MSDOS icon. When you bring it up, it'll show the program output.
You can also execute this file from a DOS window. You need to cd to the Debug directory of your project. It should contain a file scomp.exe. Then just execute this executable, like this: scomp. The program's output will appear in the same DOS window, and return with a DOS prompt. (You can't debug the program this way, but you may want to use little text-based programs compiled this way).
Debugging with Visual C++: This is easy. Start by bringing up one of the CPP source files in an editor. You can then plant breakpoints where you feel you need to inspect your program, as explained above. Choose one just ahead of a suspected problem area. If your program crashes, try running it with no breakpoints. When it stops, you may be in a system library function that detected an error. The Context window will show you the sequence of function calls that led to the crash. Scroll through those until you recognize one of your functions, then click on it. You'll see an edit window come up with a pointer to your source line that caused the problem. Of course, the error may not be in that line. It could be any number of things, but will usually be related to misuse of a pointer. You might get some clues from the system library function source that may come up on a crash. For example, an invalid string pointer will likely crash in a strcmp or strcpy function, not in your source. Use the Context line in a window near the bottom-left to view the function calls that led up to the crash. Click on the arrow to the right of this window to bring up the function call list; the most recently called function will be on the top.


Unix with the Gnu Compiler: You should be operating from a Unix shell. If you don't know what this looks like, or how to get one, ask someone familiar with your platform. A shell accepts commands with parameters. We'll assume you are in a directory that contains your file scomp.cpp. Then type this:
g++ -g scomp.cpp -o scomp

The result should be an executable file scomp (no suffix).
- The -g asks the compiler to add debugging tables to your executable file. See the next section for tips on debugging your program.
- g++ will accept any number of files with the suffix .cpp or .c. You don't need to list any .h files; they are normally included with your cpp and c files.
- The -o scomp names the resulting executable file.
Execute it by naming it, like this:
scomp

Here's what you should see from the execution. These come from the cout printing directives in the program:
(5,7)
(37,13.5)
(1,2)

Debugging in Unix with the Gnu Debugger: Learning a few debugging operations will save you a lot of time when you work on some larger programs. You should be operating from a Unix shell. Your executable program should have been compiled with the -g option (essential!). The Gnu debugger is called gdb, and should be installed, ready to use. Type this:
gdb scomp

This is a command-driven debugger, not a windows based debugger. You need to learn a few commands to proceed. Here's a little list:

Command    Description
set args   Enter command-line arguments after set args, just as you might from the command line
b n        Set a breakpoint. n can be a line number or a function name
p e        Print an expression e. The expression is in C++ syntax
l          List several source lines. The current line will be near the middle of this list
r          Run or restart the program. It will stop on a trap, when finished, or when the first breakpoint is seen
c          Continue the program. Type this after you've stopped at a breakpoint
s          Single-step, a line at a time, stepping into any function call
n          Single-step, a line at a time, but execute functions without stepping into them
q          Quit the debugger

To get started in scomp, try this:
- Set a breakpoint on main with b main.
- Run with r. It should stop on the first line in main. (Any global constructors will have been run before this breakpoint is seen. If you have problems with them, gdb may stop on a crash in one of them. You can set breakpoints in the constructor function to check out the difficulty).
- List the current source environment with l.
- Try single-stepping with s. This will take you into function calls.
- Try printing some variables. You can use compound names like c1.re, or just c1 to view the whole object.
- Quit with q.
You can learn more commands by typing help at the command prompt. The commands are classified by category, so you need to try one of the categories to learn more about the commands.

Object Allocation
This example (see above) illustrates each of the common ways of allocating memory space for an object.
- Object c4 is allocated as a global. It's therefore accessible in any of the functions, whether they are member functions or not, including main.
- Object pc is a pointer, which is also accessible in any function. However, the operator new must be used to allocate space and bind the pointer to that space. The space will persist until delete is called somewhere later. Notice that new is called in function fcn, but that delete is called in the main program. (You don't really need to deallocate space with delete; all heap space will be reclaimed when the program exits).
- Object c1 is allocated as a local variable in the main function. Local variables remain "alive" for the duration of the runtime function, but their memory space is reclaimed upon an exit from the function. They are also not statically accessible outside the function. This object is declared to be type complex, which is understood to be a class. It carries two parameters, and that fits the first constructor of the class complex. Its real part will therefore be 22.0 and its imaginary part 13.5.
- Object c2 is also allocated as a local variable. Its parameters fit the second constructor function. Its real part will be 15.0, and the imaginary part 0.0.
- Object c3 is allocated as a local variable. Since there are no parameters, it must fit a constructor function with no parameters, i.e. the third constructor function. Its real part will be 0.0 and its imaginary part 0.0.
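The three allocation styles can be compressed into a few lines. This fragment reuses the complex class from the example; the function name demo is ours:

complex g(1, 2);                   // global: constructed before main runs,
                                   // destructed when the program exits
void demo(void) {
  complex local(3, 4);             // local: destructed when demo returns
  complex *hp= new complex(5, 6);  // heap: lives until delete is called
  delete hp;                       // destructor for *hp runs here
}                                  // destructor for local runs here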

Calling a Member Function


Look at object c3, allocated in the main example above. The member function add must be called like this from main:
c3.add(c1, c2);

as shown above. What's happening here is that the add function is bound to the object c3, which carries the data values (0,0) just before the call. When add is called, it can access c3's data values, changing them if necessary. It's as though c3 were passed to the function as a parameter. Instead, c3's parameters are directly accessible to its member functions as though they were declared as globals. add in fact changes re and im of the c3 object to the sum of the values found in c1, c2. That is what we meant when we said that a member function of a class is bound to an object of that class. The object c3 becomes bound to the add function, which can then access the data and function members of c3 in its operations. The parameters c1, c2 that are explicitly passed to the add function can also be accessed in the function, but as parameters. Of course, they carry different data values. add must reference them using the compound names x.re, x.im, y.re, and y.im, as shown in the code for add. For a pointer to an object, such as pc in the example above, the call would look like this:
pc->print();


The idea is exactly the same. print may now access the data members of the *pc object (the object pointed-to by pc) as though they were globals. Thus in the print code, we see a reference to re and im. These are the data members of the *pc object.

Data Members
The line
double re, im;

describes the data members of this class. Every object instantiated from this class will carry these two numbers.

Constructors
The line
complex (double r, double i) { re= r; im= i; } // a constructor

declares a constructor for the class. This constructor is invoked (out of the three possible ones) when two parameters are specified, as in this example:
complex c1(22.0, 13.5); // uses the first constructor

Notice how the class name is used as though it were a typedef in C. The name c1 now refers to a particular object instantiated from this class. In C++, in addition to allocating space for the object, the object's constructor is called. By adding the parentheses and parameters, we cause the first constructor to be called; it sets the data member variables re and im to the values passed as parameters. So we end up with an object containing the complex number 22.0 + 13.5j. If we had more work to do on the newly constructed object, we could write code into the {} section.
Also notice how a constructor has no return type. Nothing can be returned from a constructor; it's used primarily to set up the internal state of the object.
This constructor illustrates inline code declaration. The syntax for this declaration is exactly like that for a complete C declaration of this function. You can write any code inside the {} braces. We'll see how the code can also be written in a separate file shortly. The line
complex (double r) : re(r), im(0.0) {} // another constructor

can also be used to construct a complex number object. Here, only the real part is specified in the construction; the imaginary part is set to zero. But notice that some special alternative syntax is provided by C++. You can list the data member names and specify their initial values, like this:
: re(r), im(0.0)

They can also be listed inside the {} form, if you prefer that, as we did for the first constructor. There's a good reason for having this alternative initialization method, as we'll explain later. By the way, this form can only be used with constructors. The line
complex (void) : re(0.0), im(0.0) {} // a third constructor

is a third constructor. This one just initializes the number to 0+0j. It will be invoked in an object declaration like this:
complex c3; // uses the third constructor

This makes the declaration of an object with void parameters similar to a C declaration of a simple variable or a struct. In C++, the void constructor is used to initialize data members, whereas in C, no constructor is called automatically upon a declaration.


Do not instantiate an object with a void constructor with parentheses, like this:
complex c4(); // wrong!!!

Although this appears to be a reasonable alternative to the instantiation without parentheses, it can be ambiguous under certain circumstances. For a more detailed discussion of this point, see reference [1].
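The ambiguity deserves a one-line illustration: a declaration with empty parentheses is parsed as a function prototype, not as an object definition. The names f and c5 below are ours:

complex f();    // parsed as: declaration of a function f taking no
                // arguments and returning a complex -- not an object!
complex c5;     // an object, instantiated with the void constructor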

Default Constructor
A default constructor will be provided by the compiler only if you don't write any constructors. The default constructor takes no parameters and executes no code, except to construct objects embedded in this class and to call the constructor for any base class. (Both these operations are hidden and require no explicit coding.) If you write even one constructor, with or without arguments, no default constructor is supplied by the compiler. This can mean that no void constructor exists for some class. The compiler may discover a need for a void constructor elsewhere in the code, and complain about its absence. This error message can be confusing to the novice; its cure is to provide a void constructor in the appropriate class.
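A short example makes the rule concrete. The class name point is ours; the commented-out line is the one the compiler would reject:

class point {
  int x, y;
public:
  point(int xv, int yv) : x(xv), y(yv) {}  // the only constructor
};

int main(void) {
  point p(1, 2);   // OK: matches the two-parameter constructor
  // point q;      // error: no void constructor exists, because the
                   // compiler supplies no default once we wrote one
  return 0;
}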

Destructors
The line
virtual ~complex(void) {} // a destructor

is a full declaration for a destructor. It has the same name as the class, except that the symbol ~ precedes the name. Like a constructor, a destructor has no return type. It also cannot carry parameters. (This means that no class can have more than one destructor. [Why not?]) The keyword virtual used here should always appear on a destructor; we'll explain why later.

The code for a destructor may appear in a companion .cpp file, as with other member functions. Here, we've written it into the class. But nothing needs to be done, so the {} contains no code. When the destructor does nothing, you can safely just leave it out of the class completely. A default destructor is always provided by the compiler if you don't provide one, and it effectively does nothing. Similarly, a default constructor will always be provided if you don't write one, and it will be like a constructor with no parameters that does nothing.

You'll want to write a destructor if your object has set something up that needs to be undone when the object is deleted. For example, you might want to open a temporary file in the constructor, and make sure it's closed when the object is destroyed.

The destructor code is called whenever the object is to be deallocated. If the object was allocated from the heap through new, then the destructor will be called through delete. If the object was allocated as a global, the destructor is called just before the program terminates execution. You can always depend on this happening, even if the program is terminated through an exit somewhere deep in your code. That's a very useful feature, since you may only be writing one module of a large system, and you would otherwise have no other way of forcing some code to be executed before the program terminates. If the object was allocated from the runtime stack (in a function), the destructor will be called just before the function returns.

The point is that the C++ compiler pays close attention to each active object, regardless of how it was allocated, and makes sure that when the space for an object disappears, its destructor is also called. The only exception to this rule is for objects allocated from the heap through new: you must make sure that delete is called on each of them if you must have their destructor run. The compiler and operating system are not configured to keep track of heap objects. Only the programmer can do that.
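To make these call sites concrete, here is a small sketch (the class name tracer is invented for this illustration); each comment marks when the destructor runs:

    #include <iostream>
    using namespace std;

    class tracer {
    public:
        ~tracer() { cout << "destructor called" << endl; }
    };

    tracer global_t;       // destructor runs just before the program terminates

    void f(void)
    {
        tracer local_t;               // destructor runs when f returns
        tracer *heap_t= new tracer;
        delete heap_t;                // destructor runs here; omit the delete
                                      // and it never runs
    }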

Inline vs. non-Inline Code


Inline code is essentially copied in place wherever it is called. So there will be as many copies of a constructor's code as there are constructor calls appearing in the source file. Non-inline code is generated only once, as a callable function; each constructor call is linked to that code through a function call at runtime.

It should be clear that if the function code is large, it should be declared non-inline. That means that a function call and return is needed at runtime, but only one copy of the code has to be supported in memory. If the function code is short, then an inline definition probably should be used. At runtime, there will be many copies of this code, but the performance will be better since no function call and return needs to be executed.

One disadvantage of an inline function is that some debuggers are unable to display the data members during an inline function call. So you should keep inline functions short and simple; otherwise you will have some problems diagnosing a bug.

You can use the keyword inline with a separate function declaration, if you want that function to be treated that way. Writing the function directly in the class definition always makes it inline.
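For example, here is one way to use the inline keyword with a separate definition; the class name complex2 and the member magnitudeSquared are invented for this sketch, to avoid clashing with the complex class used elsewhere:

    class complex2 {
        double re, im;
    public:
        complex2(double r, double i) : re(r), im(i) {}
        double magnitudeSquared(void);   // prototype only
    };

    // the inline keyword asks the compiler to expand this call in place,
    // just as if the body had been written inside the class definition
    inline double complex2::magnitudeSquared(void) { return re*re + im*im; }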

Member Functions
The line
void add (complex &x, complex &y); // a member function

is a member function prototype declaration. It's a prototype since no code is provided. (Notice that it's terminated with a semicolon rather than a {} pair.) This will require a full declaration (with code) later in this file or in a companion .cpp file before the program can be linked. See the declarations for add and print in the above example. The member function declaration is written as any other C function might be written, except that the function name is qualified by the class name. That is, you need to write complex::add and not just add for the function name. This tells the compiler that this function is a member function of the complex class, and not just some global C-style function. Also, the function parameters in the full declaration must exactly match those in the class declaration. So here's the full declaration for complex::add:
void complex::add(complex &x, complex &y)
{
    re= x.re + y.re;
    im= x.im + y.im;
}

A new operator appears here: ampersand (&). Ampersand is normally used in C to obtain the address of some variable. That use is supported in C++, but that's not how it's used here. In C++, ampersand can appear in a formal parameter declaration, as used here. It means that the marked parameter is passed by reference, which is a new way of passing parameters to a function. To understand this and how parameters are passed in C++, read on.

Pass by Value
C has always supported pass by value. In this, the actual parameter (the parameter appearing in the call) is evaluated, and a copy of its value is pushed onto the runtime stack to serve as the formal parameter variable (the parameter appearing in the declaration). The function works on that copy, not the original

actual parameter. For example,


#include <iostream>
using namespace std;

void myfnc(int k)
{
    k = 22;   // this works on the copy, not the original value
}

int main()
{
    int value= 55;
    myfnc(value);
    cout << value << endl;   // prints 55, not 22
    return 0;
}

In this example, function myfnc is called in function main. The actual parameter value is passed to the function by copying its current value to a temporary location in the runtime stack. Inside the function, the copied value is referred to by the formal parameter name k. Its value is 55 just before the line k=22 is executed. Just after that line is executed, the copy has the value 22, but the original value is still 55. That's why the cout prints 55, not 22.

Pass by Pointer Value


This is also a pass by value, but we pass a pointer to our object instead of a copy of the object. The function can now change the original value. Here's an example:
#include <iostream>
using namespace std;

void myfnc(int *k)
{
    *k = 22;   // this changes the original value
}

int main()
{
    int value= 55;
    myfnc(&value);
    cout << value << endl;   // prints 22
    return 0;
}

Here, the call myfnc passes a copy of a pointer to object value. (That's what the ampersand operator & does.) Inside function myfnc, its formal parameter is declared as an int*, that is, a pointer to an int. The assignment statement
*k = 22;

then dereferences the pointer k, setting the object's value to 22. After returning from the function call, the cout will print 22, not 55. In more technical terms, the int value has some address in memory, say 0x5566. That address is passed to myfnc, not the value in that memory address, which is 55. In myfnc, the notation *k means to access the memory at the address carried by k, i.e. k carries an address, not a value. With *k on the left side of an assignment, the operation causes the integer at memory address 0x5566 to be changed to 22; the previous value is lost.

Pass by Reference
This is a new C++ feature. Here's an example:
#include <iostream>
using namespace std;

void myfnc(int &k)
{
    k = 22;   // this works on the original value
}

int main()
{
    int value= 55;
    myfnc(value);
    cout << value << endl;   // prints 22
    return 0;
}


Notice how this example looks almost exactly like the pass by value example syntactically. The only difference is the & in front of the formal parameter. That changes the function's access so that it affects the original variable, not a copy of it. In fact, pass by reference is implemented exactly like a pass by pointer value. The call
myfnc(value)

is implemented by the C++ compiler by copying the address of value to the runtime stack. That address is then used inside myfnc to service all references to the formal parameter k. The compiler knows that it needs to do this by the declaration
int &k

as the formal parameter of myfnc. In our complex number example, the add function is written like this:
void complex::add(complex &x, complex &y)
{
    re= x.re + y.re;
    im= x.im + y.im;
}

Notice how the function can access the internals of the two complex objects through the dot notation, not the arrow notation. In fact, since the two objects passed are not changed, we could equivalently write this without the ampersands; we would then have a pass-by-value situation. And, since the passed parameters aren't changed in the function, it's better to write the declaration like this:
void complex::add(const complex &x, const complex &y)

The const attribute tells the compiler that we don't intend to change either of the variables x, y, even though they are passed by reference, and we could if we wanted to. (const is discussed further later on.) Also notice how we don't need to have function add return its sum, or provide a reference parameter to receive the sum, as would be required in C. add is bound to its object, which contains the data values for a complex number. These data values are referred to by re and im in the code of any member function, so that's where the sum goes.

Is Pass-by-Pointer the Same as Pass-by-Reference?


At the machine level, both of these pass a pointer as an actual parameter. The function can then change the value pointed to. So the two mechanisms appear to be the same.

There is one important difference, however: in pass-by-reference, the "pointer" can never be NULL. From the perspective of the function, a reference variable always exists. It's not possible to pass NULL by reference. This also means that while the function can legally call delete on a pointer parameter, it cannot do so on a reference parameter.

With a pointer parameter, the parameter may be NULL. So any function should (for safety's sake) always test the pointer for NULL before using it in any dereference operation.

Note that a reference parameter can be passed along to other functions, or used in a recursive call, using any of the three forms of parameter passing.
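Here is a small sketch of the practical consequence (the function names are invented): a pointer parameter should be tested before dereferencing, while a reference parameter needs no such test:

    void byPointer(int *k)
    {
        if (k != 0)    // a pointer parameter may be NULL; test before using it
            *k= 22;
    }

    void byReference(int &k)
    {
        k= 22;         // a reference always refers to an existing object
    }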

Best Practice
Regarding parameter passing and function return values, I recommend the following. (This isn't always followed in the student compiler source code and examples in this book.)

- If you want a function to receive a variable by reference, but don't want the function to change it, pass it as a const type&. This passes it by reference, which is good for performance, but does not permit the function to change it.
- If you want the function to change a variable, pass it as type&. The function can then change its value. Note that the variable must exist to be passed this way.
- If you have a pointer to something, and want the function to check whether it's NULL or not, pass it by pointer, i.e. type* or const type*. In either case, the pointer may be NULL. With const, the function is forbidden to change the variable pointed to, of course.
- Passing a large object by value, e.g. bigtype variable, can always be done, but is expensive. Runtime stack space is needed for each such pass, and the object must be copied to the stack. You won't notice any difference in performance for small objects, or for large objects that are only passed a few times. But if your function is recursive or is called many times, you should reconsider this strategy. Most of the time, passing the object as a const reference is equally effective and much faster than passing by value.
- Pointers are usually passed by value. But you can pass a pointer by reference, if it's necessary for the function to change the pointer itself, not just the thing pointed to, like this:

void fcn(double*& value)
{
    double *mypointer= new double[25];
    value= mypointer;   // setting the pointer
    *value= 25;         // dereferencing the pointer, now that it's valid
}

double *p;   // declaration of the pointer
fcn(p);      // passing it by reference
// on return, p points to 25 doubles allocated in fcn

Function Return Values


In C++, a function can return any of these:
- a simple value, by copying the "return" value to the recipient of the function call,
- a pointer to an object, again by copying the pointer to the recipient, or
- a reference to an object. This is analogous to returning a pointer, except that the recipient receives a reference to the object returned by the function.

For pointer and reference returns, the thing pointed to must be valid after the function has exited. Returning a pointer to a local variable or a value parameter of the function is a BIG MISTAKE, since those are typically allocated on the runtime stack, and become invalid once the function returns. You won't notice this mistake at runtime until some more functions are called, at which time a mysterious crash may occur. Similarly, returning a reference to a local variable or formal parameter is also a mistake, one that most compilers will not warn you about.

You can safely return a pointer or reference to a global variable, to a static variable (these are effectively global), to a heap variable (allocated by new), or to a variable passed to the function by reference.
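The following sketch (the function names are invented) contrasts an unsafe return with safe ones:

    int globalValue= 10;

    int& badRef(void)
    {
        int local= 5;
        return local;    // BIG MISTAKE: local disappears when the function returns
    }

    int& goodRef(int &param)
    {
        return param;    // OK: the referenced variable outlives the call
    }

    int* goodPtr(void)
    {
        return &globalValue;   // OK: a global remains valid after the return
    }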

Putting it All Together


Here's what a complete .h file would look like for class complex:
// complex.h
#ifndef COMPLEX_H
#define COMPLEX_H
// This class definition would normally be in a header file (.h)
#include <string>        // for std::string, used by print
using std::string;


class complex {
private:
    double re, im;
public:
    complex(double r, double i) { re= r; im= i; }
    complex(double r)               // float to complex conversion
        : re(r), im(0) {}
    complex(void)
        : re(0), im(0) {}
    void add(complex &x, complex &y);
    void set(double r, double i) { re= r; im= i; }

We've changed print slightly by adding an optional parameter indent. You can still call print without a parameter, in which case, indent is the empty string. If you call print with a string, that gets printed first.
    void print(const string& indent= "");
};
#endif

The covering #ifndef ... #endif makes it possible to include this file several times within the same compilation. Every .h file should be covered in a similar way. The general idea is that a particular .h file might be included several times in the compilation of some file. C++ doesn't permit a class definition to appear more than once, so we need to make sure that if this file is included more than once, only the first inclusion has any effect. The material between #ifndef and #endif is seen by the compiler only if the preprocessor variable COMPLEX_H is not defined. It isn't defined until the first appearance of file complex.h; after that, it is defined, and subsequent includes of this file are effectively ignored. Notice that anything appearing outside the #ifndef ... #endif is included each time.

Since some of the member functions do not contain inline code, we need to define them. We also need a function main so that we can do something with our class. Here's part of a simple .cpp file to do that. We've added some comments to explain the features:
// complex.cpp
// A complete complex number example

The iostream header is needed to support cout, cin, and other iostream functions.
#include <iostream>     // supports cout
#include "complex.h"
using namespace std;

We must include complex.h, because it contains the all-important class complex { } definitions.

Object gc is declared at the global level. It uses the default (void) constructor.

complex gc;   // global complex object

Object gcp is a pointer to an object. We set it to 0 here to initialize it.


complex *gcp= 0;   // global pointer to a complex object

This is a full declaration of the add member function of class complex. Notice how "complex::" links the name "add" back to the class definition. The return type and parameters must be compatible with the class definition.
void complex::add(complex &x, complex &y)
{
    re= x.re + y.re;
    im= x.im + y.im;
}


Here's the declaration of member function print. It uses cout to print the real and imaginary parts. We've added an indentation that's printed first, as explained above.
void complex::print(const string& indent)
{
    cout << indent << '(';
    cout << re << ',' << im;
    cout << ')' << endl;
}

Notice that we've provided a full declaration for the add and print functions. As member functions, the class specifier complex:: is needed so that the compiler will associate these functions with the corresponding prototypes in the class definition. The compiler will complain if:
- a function is found in this .cpp file that does not appear in the class definition, or
- the function does not agree with the class prototype with respect to all parameter types and the return type.

Most C++ compilers will not complain about a function prototype found in the class that is never fully declared. Such a function is assumed to be declared in some separate file. External calls are resolved by the linker, which is also supposed to keep track of the parameter types. If the linker finds a call to a function, but no matching function definition, it will complain and refuse to generate an executable file. This is a feature: it makes it possible to set up class definitions with a complete set of member functions, but test the system by writing and testing a few functions at a time.

We earlier reviewed the object instantiations in main, and described how the member functions are called.

Inheritance
Classes support inheritance, as we mentioned earlier. This means that a class B can draw upon data and function members belonging to another class C, by inheriting these attributes. Inheritance operates recursively: C can inherit from D, which could inherit from E, and so on. The inheritance relationship cannot be circular, i.e. B cannot inherit from C while C inherits from B. In C++, a class can inherit from several different parent classes; this is called multiple inheritance.

A parent class is called a base class. The classes inheriting from a base class are called derived classes. A base class is usually ignorant of any of its derived classes. However, every derived class knows its base classes exactly.

Figure 1 shows an inheritance diagram, in which A and B inherit from C, and C inherits from D. In general, this means that an object of type A can make use of data and member functions found in C and D. So can B. However, A can't use the data and functions of object B, nor can B use anything in A.

[Fig. 1. Simple inheritance: classes A and B are derived from C, which is derived from D.]

Example
Suppose we have a vehicle class. A vehicle might describe a car, an airplane, a boat, or anything else that moves. Vehicles can be said to have a weight, a velocity and a fuel consumption, and more. We might declare a vehicle class like this:

class vehicle {
public:
    float weight, velocity, fuel_consumption;
};

Let's now specialize the vehicle class. In particular, a car is a kind of vehicle. It has all the properties of a vehicle, but there are also some special properties that not all vehicles have. In particular, a car has a steering wheel, but an airplane doesn't. When we describe the attributes of a car, it doesn't make sense to just copy all the attributes of a vehicle into the car class. In C++, it's better form to write a class for car that inherits vehicle, like this:
class car: public vehicle {
public:
    float steering_wheel_radius;
};

This says that a car is a kind of vehicle, and in particular, there's a property steering_wheel_radius that a car has, but a more general vehicle doesn't have. This also says that a car object can make use of the data members and functions of its parent vehicle class, without copying them into the car class definition.

We can instantiate either a car object or a vehicle object. What's the difference? Simple: when we instantiate a vehicle, the object contains only the three data members associated with that class, i.e. its weight, velocity, and fuel_consumption. It would not contain a steering_wheel_radius. When we instantiate a car, we obtain an object that contains all four of these parameters: the three associated with a vehicle and the additional one associated with a car. Figure 2 shows how memory space is allocated for a vehicle object (on the left), which only contains three values: weight, velocity, and fuel_consumption.

[Fig. 2. Memory space required by a class object vehicle (left) and the derived class car (right). The vehicle object holds weight, velocity and fuel_consumption; the car object holds those three members at the same (lower) offsets, with steering_wheel_radius added by the car class at higher addresses.]

An object of the derived class car (shown on the right in Figure 2) contains space for these three values and the value steering_wheel_radius. Notice that the values for the derived class appear below those of the base class in the object, i.e. at higher memory addresses. This implies that a car object's data can be accessed as though it were a vehicle object, since the data offsets are the same whether the object is class vehicle or class car.

Base Class Reference by Derived Class


A member function of vehicle can be accessed by a car object in a similar manner to data members. For example,


class vehicle {
public:
    float weight, velocity, fuel_consumption;
    float weightInPounds(void) { return weight; }
};

class car : public vehicle {
    // etc.
};

Then the car object myCar can call the member function weightInPounds like this:
float w;
car myCar;
w= myCar.weightInPounds();

Base Class Reference to Derived Class Members


Let's look at another example that allocates some objects, then refers to their data members. (This needs a #include for the car and vehicle class definitions.)
int main(void)
{
    vehicle myvehicle;   // space for one vehicle object
    car mycar;           // space for one car object
    car manycars[20];    // space for an array of 20 cars
    car *ptrcar;         // space for a pointer to a car object

Now let's see how to refer to the data in these objects. This sets the weight member of mycar. Note that although weight technically belongs to a vehicle, since car is derived from vehicle, a car object can refer to the data members belonging to the vehicle class.
mycar.weight= 4500;

This sets the velocity member of the fourth car object of the manycars array.
manycars[3].velocity= 60;

This creates a new car object from the runtime heap. ptrcar becomes a pointer that points to it.
ptrcar= new car;

We can then set the weight data member through the next line, which dereferences the pointer, and selects the weight data member of the object.
(*ptrcar).weight= 3600;

The previous line can also be written like this, using the C shorthand for a pointer dereference:
    ptrcar->weight= 3600;
    return 1;
}

The point of this example is to show that a derived class object (in this case, car) can also be accessed as though it were an object of the base class (vehicle). The compiler permits this as part of the inheritance services, and it's made possible by the arrangement of data in the object as shown, i.e. base class data precedes derived class data in the object in memory order.

Data Access Protection


We've used the keyword public in all the classes defined thus far. There are actually three levels of access to data members and member functions of a class, as follows:


- private: only the member functions of this class can access the member. This is the default.
- protected: the member functions of this class and of all classes derived from this class can access the member.
- public: any function can access the member, whether part of some other object or an ordinary C global function.

Of course, any data access still requires an explicit or implicit object name reference. You can only omit the explicit object reference when the data members or functions are referred to within the object's own class or a derived class. Let's look at an example. (This is not a complete program.)

Example
class vehicle {
    float fuel_consumption;   // private by default
protected:
    float weight;
public:
    float velocity;
    void func1(void) { fuel_consumption= 3; }   // this is OK
};

class car : public vehicle {
public:
    float steering_wheel_radius;
    void cfunc1(void);
};

void car::cfunc1(void)
{
    fuel_consumption= 3;   // ILLEGAL because this data member is private.
    weight= 1550;          // OK because this data member is protected.
}

int main(void)
{
    car mycar;
    mycar.fuel_consumption= 15;   // ILLEGAL
    mycar.weight= 1550;           // ILLEGAL
    mycar.velocity= 65;           // OK
}

Discussion of Example
The protection provided by private or protected refers to the environment in which the reference appears, not the object used for the reference. For example, fuel_consumption is a private member of the vehicle class. This means that it can only legally appear in a member function of the vehicle class. It would be illegal in a member function of the car class, or anywhere else, for example in the main function, which is not a member function of any class. So the line
    mycar.fuel_consumption= 15;   // ILLEGAL

that appears in the main function is illegal. So is the line

    fuel_consumption= 3;   // ILLEGAL because this data member is private.

that appears in the member function car::cfunc1.

Now consider weight, which is a protected member of the vehicle class. This may legally appear within a member function of class vehicle and any derived class of vehicle, i.e. car. Of course, it is still bound to an object of that class. The line


weight= 1550; // OK because this data member is protected.

is therefore legal, and does not require an explicit object reference; the object is assumed to be this object. Finally, consider velocity, which is a public member of the vehicle class. This may legally appear anywhere. If this name is used within a member function of vehicle or one of the classes derived from vehicle, it doesn't need an explicit object designator. However, within the main function, an explicit object designator is required, hence the line
mycar.velocity= 65; // OK

is legal.

More About Constructors and Destructors


Every class has a default constructor and destructor generated by the compiler, if you don't declare any. The default constructor will allocate an object with its data members left uninitialized. (That's not quite right: class objects embedded in the class will be initialized by calling their constructors. However, simple members such as a double will not be initialized.) The default destructor does nothing with the object, although the object's space is always reclaimed.

It's usually good practice to write your own constructor, or at least have a good reason for not doing so. Here's why:
- The object's data members may require specific initialization. For example, pointers should never be left uninitialized; set them to 0 in the constructor if you don't know what else to do with them. Other simple values (double, int, char, etc.) will not be initialized, and it's good practice to do that. Nor will arrays or structs.
- The object may contain a FILE object, in which case you may want to open the file upon construction, and close it on destruction. If you use an iostream class instead of FILE, its constructor will be called, but you still won't have an open file.
- If your object A contains an embedded object B, the default constructor for B will be called, unless you explicitly specify a special constructor for it, in a way that we'll illustrate later.

The default destructor will deallocate space in your object. It will also call the destructors of any embedded class objects. So if (for example) there's an open FILE object, you should explicitly write a file close in the destructor. However, note that an iostream open file will be closed through its destructor call. (iostream is a proper C++ class, while FILE is just a C struct.) If there are pointers in your object that point to memory allocated from the heap, the destructor probably should deallocate that memory through a delete call. Note that the operating system will clean up your heap at the end of your program if you don't.

Note that a destructor takes no parameters. Any information needed by the destructor should be built into the object. For example, if a file might be opened or closed, that fact should be apparent from the object's data members when the destructor is called. For this reason, there can be only one destructor for a class. (There's no legal way to write two overloaded functions with no parameters and no return type.)

Construction and Destruction with Base Classes


Suppose a class C is derived from some class B. Also suppose that C contains an embedded object de of class D, like this:
class B {
    // something inside
};

class C: public B {
    D de;
    // more inside
};

Now assume that an object of type C is being instantiated. Here's what happens:
1. Space is allocated for all the class data members, which includes space for B's members, C's members, and the object de inside C.
2. The constructor for C's base class is called, i.e. B's constructor is called. Notice that this occurs before C's own constructor body is executed. This works recursively up through any chain of inherited classes. If B has several different overloaded constructor functions, you can choose the one that best suits this constructor, like this:

    C(C parameters) : B(B parameters) {}

3. Member initializations given with the alternative initialization syntax, i.e.

    complex(double r, double i) : re(r), im(i) {}

are performed next, after the base class construction but before the constructor body runs. If there are any objects embedded in this object, i.e. object de, their constructors are also called at this point; if D requires constructor parameters, they must be supplied in this list, drawn from parameters valid for C. Notice that the space required for de is already provided for; we just have to call its constructor function. Also notice that this does not apply to pointers to objects, because the compiler has no way of knowing whether the pointer is valid. Pointers are left uninitialized and no heap space is allocated for them, by default. You need to spell anything like that out in the constructor.
4. This constructor's code is executed. Notice that the most senior parent class is serviced first, then its descendants, and so forth down the chain of derived classes, and that all embedded objects are valid by this time. This implies that the code for this constructor can assume that all of its base classes are valid: functions in the base class can be called to further specialize them, and initial values in the base classes can be depended upon.

The destructor works in the opposite fashion:
1. The destructor code for this object is called.
2. If there are any embedded objects, their destructors are called.
3. The destructors for all base classes are called.
4. Space for the whole data object is deallocated.

Note that the space for the base classes and embedded objects is retained until the very last step. When the space of an object is deallocated, some data member of the object is usually mutated, making that object invalid. The keyword virtual in a destructor causes the appropriate derived class destructor to be called when an object is deleted through a base class pointer. This is usually what you want; read on.
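The ordering is easy to verify with a sketch like the following (the classes B, C and D mirror the example above, with tracing added); the printed messages show base-first construction and the reverse order on destruction:

    #include <iostream>
    using namespace std;

    class B {
    public:
        B()  { cout << "B constructed" << endl; }
        ~B() { cout << "B destroyed" << endl; }
    };

    class D {
    public:
        D()  { cout << "D constructed" << endl; }
        ~D() { cout << "D destroyed" << endl; }
    };

    class C: public B {
        D de;   // an embedded object
    public:
        C()  { cout << "C constructed" << endl; }
        ~C() { cout << "C destroyed" << endl; }
    };

    int main()
    {
        C c;        // prints: B constructed, D constructed, C constructed
        return 0;
    }               // then: C destroyed, D destroyed, B destroyed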

Issues with New and Delete


The new operator must be applied to the object that you are interested in, not one of its base classes.


For example,
vehicle *vp= new vehicle;

creates a vehicle object, but does not provide any space for a steering_wheel_radius. However,
car *cp= new car;

does provide space for a steering_wheel_radius. Also, the pointer cp, although declared as a car pointer, can be manipulated as a vehicle pointer. For example,
vehicle *vp= (vehicle*) cp;
vp->weight= 15;

is legal. The cast (vehicle*) is required by some compilers as an assurance that you know what you are doing. What makes this work is that all the data for a base class has lower addresses than the data for a derived class (see Figure 2). Although the base class has no idea that its object in fact contains derived class material, it can nevertheless access its own data as though the derived data weren't there. So our vehicle pointer vp can access weight, fuel_consumption and velocity as though the object it points to were merely a vehicle object rather than a car object. Of course, vp cannot access steering_wheel_radius, since it knows nothing about any of its derived classes.

The delete operator can be applied to a pointer of the allocated class or to any of its base class pointers. This works fine, provided that every destructor has the virtual attribute. It happens that the mechanism of deallocating space (from the heap or runtime stack) is independent of the C++ constructor/destructor system. It requires a pointer to the least address of the object, which the compiler of course can always provide.
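Here is a minimal sketch of why the virtual attribute matters when deleting through a base class pointer (the class names vehicle2 and car2 are invented, to avoid clashing with the earlier example):

    class vehicle2 {
    public:
        virtual ~vehicle2() {}   // virtual: delete through a vehicle2* is safe
    };

    class car2: public vehicle2 {
    public:
        ~car2() { /* release car-specific resources here */ }
    };

    int main()
    {
        vehicle2 *vp= new car2;
        delete vp;    // calls ~car2, then ~vehicle2, because ~vehicle2 is
                      // virtual; without virtual, only ~vehicle2 would run
        return 0;
    }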

More about the Alternative Initialization


Consider the special initialization syntax used in the following example:
complex(double r, double i) : re(r), im(i) {}

The construction sequence makes it clear why C++ needs the special initialization syntax
: re(r), im(i)

There are other issues, as follows:
- Only the members of this object can be so initialized. You can't initialize data members of a base class this way, but their initialization can be controlled; read on.
- You can only use this syntax in a constructor. It's illegal anywhere else.
- These initializations apply after the base class constructors have run, but before this constructor's code is executed. See the section Construction and Destruction with Base Classes above for details.
- This initialization works even in the presence of const. If an integer member is declared with const, then it can only be initialized like this, not in the {} code section.
- If you need to specify a particular constructor for a base class, call its constructor in this list, like this: BaseClassName(parameters). You can't do this in the {} code section. If a base class needs particular values during its construction, they must be passed this way, through its constructor parameters.
- The initialization form thus permits a derived class to specify just how its base class will be constructed, when there's more than one constructor provided for the base class. You can carry that specification up through any number of levels of base class, including multiple base class members. The keyword virtual (in a virtual base class) can alter this ordering.
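As a sketch of selecting a base class constructor from the initialization list (the class names Base and Derived are invented):

    class Base {
        int size;
    public:
        Base(void) : size(0) {}
        Base(int n) : size(n) {}
    };

    class Derived: public Base {
        const int id;
    public:
        // choose Base's int constructor, and initialize the const member;
        // neither could be done inside the {} body
        Derived(int n, int i) : Base(n), id(i) {}
    };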


Virtual Functions
The keyword virtual can be applied to any member function, including the destructor. It should never be used with a constructor, and should always be used for a destructor.

virtual causes the function to be called indirectly through a pointer allocated in each object. This pointer and the indirection are invisible to you as a programmer, although you can uncover some of its mysteries through a runtime debugger. The C++ compiler will set this pointer for you, and guarantee that it is always valid, i.e. it will point to the function you want it to. Of course, this guarantee will be defeated if your program corrupts the pointer space in the object; you'll then most likely get a segmentation violation instead of a function call. The pointer initially designates the base class function, but it is overridden when a derived class object is constructed, if there's a derived class function with the same name and parameter types as this object's function. This is the key to the extensibility of C++ objects.

Why are virtual functions important? In a nutshell, they make it possible to define some action in a class that can't be specified until some derived classes are defined. Notice that while a derived class knows about the data and members of a base class, the reverse is not true: the base class knows nothing about the properties of any of its derived classes. When you set up a virtual function in a base class, you are doing this:
- You want some operation to be done on "this" class (meaning the base class and any derived classes).
- You have some general operation in mind, but the way in which the operation will be carried out in detail depends on the derived class.
- A base class knows nothing about its derived classes, except for this mechanism: it can "tell" its derived classes to do something through a virtual function.

Example -- A Drawable Class with Virtual Functions


Let's sketch an example in which it's important that the base class have some control over its derived classes. This is from the world of computer graphics. We start by declaring a drawable class, like this:
class Cdrawable {
    Cwindow& win;   // a window in which to draw things
public:
    Cdrawable(void);                         // constructor
    virtual void draw(int x, int y) const;   // draw an object at position (x, y)
};

The idea is that the class Cdrawable has a notion of a window, and a position in which to draw a single object. It says nothing about just what is being drawn -- that's the job of a derived class. The thing drawn might be a picture, a line, a rectangle, a circle, etc. Now we don't usually have just one thing to draw, we have a list of things to draw. Here's how that class might look:
class Cdrawlist {
    list<Cdrawable*> dlist;
    // etc.
};

Here, dlist carries a linked list of pointers to Cdrawable objects. Cdrawlist should have methods to attach one or more Cdrawable* objects to its linked list. Another method would draw the list of objects. In any case, we need to spell out what kind of objects should be drawn. For a circle, we might have this derived class:
class Ccircle: public Cdrawable {
    int radius;   // radius of the circle
public:
    Ccircle(int r) : radius(r) {}    // constructor
    void draw(int x, int y) const;   // replaces the Cdrawable draw function
};

This makes it more explicit. We are drawing a circle of radius r, not just any object. But notice that the draw function will replace the one in the base class. This function should now draw a circle with its origin at point {x, y}. For a rectangle, we might have this:
class Crect: public Cdrawable {
    int width, length;
public:
    Crect(int w, int l) : width(w), length(l) {}   // constructor
    void draw(int x, int y) const;   // replaces the Cdrawable draw function
};

Notice in each of these that the draw member function has the same name and parameter types as the virtual function in Cdrawable. It therefore effectively replaces the one in Cdrawable, when a derived class is instantiated. If we instantiate a Ccircle object, the circle drawing function will be called when we call Cdrawable::draw. If we instantiate a Crect object, then its draw function will be called. In this way, the Cdrawable class can be written in such a way that it doesn't have to know how a particular drawable is drawn, nor does it care. It only needs to call its virtual draw function to cause its object to be drawn.

Pure Virtual Functions and Abstract Base Classes


A pure virtual function is declared like this:
virtual complex add(complex &x, complex &y)= 0;

The =0 part suggests that the pointer to this function is initially zero. But the compiler won't let that stand: the linker will require that this function be resolved in a derived class of this class. Any class that contains a pure virtual function is called an abstract base class. No abstract base class can be instantiated, since it would contain an invalid function pointer. You must declare a derived class in which all pure virtual functions are resolved, then instantiate that derived class. A derived class can also contain a pure virtual function; it would also be an abstract base class. Eventually, some derived class must resolve every pure virtual function.

The Drawable Class


Here's a complete program that builds a list of drawables, draws them, then deletes them:
// Cdrawable and Cdrawlist illustrated...
#include <iostream>


#include <string>
#include <list>
using namespace std;

class Cwindow {
};

class Cdrawable {
    Cwindow win;   // a window in which to draw things
public:
    virtual ~Cdrawable() {}   // virtual destructor, so the delete through a
                              // Cdrawable* below calls the derived destructor
    // draw an object at position (x, y)
    virtual void draw(int x, int y) const =0;
};

class Cdrawlist {
    list<Cdrawable*> dlist;
public:
    virtual ~Cdrawlist();
    void attach(Cdrawable* d) { dlist.push_back(d); }
    void draw(int x, int y) const {
        list<Cdrawable*>::const_iterator di;
        for (di= dlist.begin(); di != dlist.end(); ++di)
            (*di)->draw(x, y);
    }
};

Cdrawlist::~Cdrawlist()
{
    list<Cdrawable*>::iterator di;
    for (di= dlist.begin(); di != dlist.end(); ++di)
        delete *di;
}

class Ccircle: public Cdrawable {
    int radius;   // radius of the circle
public:
    Ccircle(int r) : radius(r) {}   // constructor
    virtual void draw(int x, int y) const;
};

void Ccircle::draw(int x, int y) const
{
    cout << "circle: radius " << radius
         << " at {" << x << ',' << y << "}" << endl;
}

class Crect: public Cdrawable {
    int width, length;
public:
    Crect(int w, int l) : width(w), length(l) {}   // constructor
    virtual void draw(int x, int y) const;
};

void Crect::draw(int x, int y) const
{
    cout << "rectangle: [" << width << ',' << length
         << "] at {" << x << ',' << y << "}" << endl;
}

class Ctext: public Cdrawable {
    string text;
public:
    Ctext(const string& t) : text(t) {}   // constructor
    virtual void draw(int x, int y) const;
};

void Ctext::draw(int x, int y) const
{
    cout << "string '" << text << "' at {" << x << ',' << y << "}" << endl;
}

int main()
{
    // start with an empty draw list
    Cdrawlist cl;
    // attach some objects to the list
    cl.attach(new Ccircle(15));
    cl.attach(new Ccircle(20));
    cl.attach(new Crect(5, 10));
    cl.attach(new Ccircle(30));
    cl.attach(new Ctext("one name"));
    cl.attach(new Crect(40, 50));
    cl.attach(new Ctext("another name"));
    // now 'draw' them at position (100, 200)
    cl.draw(100, 200);
    return 0;
}

Here's the output upon execution:


circle: radius 15 at {100,200}
circle: radius 20 at {100,200}
rectangle: [5,10] at {100,200}
circle: radius 30 at {100,200}
string 'one name' at {100,200}
rectangle: [40,50] at {100,200}
string 'another name' at {100,200}

Overloaded Function Names


We've seen how the constructor for class complex is overloaded. Any member function name can be overloaded in a similar way.
- The same function name (and parameter list) can be declared in different classes. This is not considered an overloading, since the functions can be distinguished by their bound classes.
- The same function name can be declared several times in the same class, provided the declarations have different parameter list types.

C++ resolves function calls by name first, scope rules second, and the parameter types third. It ignores the return type when resolving identical names, although most compilers will issue a warning about different return types.

If the same function name F (and parameter list) appears in both class X and class Y, and Y is a derived class of X, then Y::F is the default in Y's member functions, and X::F is the default in X's member functions. That is, the derived class definition overrides the base class definition, if there's a conflict. You can force a particular function (or data member), given some name conflict between a class and a base class, by using the :: operator, i.e. X::F. This forces the use of class X's F, even though there may be a function F in a derived class of X. This operator is not a cast; it merely serves to identify a particular class definition in order to resolve one name.
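For example (a sketch with invented class names):

    class X {
    public:
        void F(void) { }
    };

    class Y: public X {
    public:
        void F(void) { }     // overrides X::F within Y
        void g(void)
        {
            F();             // calls Y::F by default
            X::F();          // the :: operator forces the base class version
        }
    };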

The friend Attribute


Suppose you have two classes X and Y. There's no inheritance relation between them, but you want Y to be able to refer to X's private and protected data, which are otherwise inaccessible. To do this, write
friend class Y;

in X's class definition. Class Y can then access any of X's data members and functions, even though they are private or protected. Class Y essentially has a set of keys to X's members. This is a one-directional relationship: X does not have permission to access Y's members unless there's a friend class X; declaration in Y's class declaration.
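A minimal sketch (the member names are invented):

    class X {
        int secret;          // private by default
        friend class Y;      // Y gets access to X's private members
    };

    class Y {
    public:
        int peek(X& x) { return x.secret; }   // legal only because Y is X's friend
    };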

Overloaded Operator Definitions


You can overload any C++ unary or binary operator with a function of your own. The operator is resolved by looking at the type or types affected. For example, if you want to write
c1 + c2

where c1 and c2 are both type complex, you can write a definition for complex addition using the + operator, like this:
class complex {
    double re, im;
public:
    // other member functions
    friend complex operator+ (complex, complex);
};

complex operator+ (complex a1, complex a2)
{
    return complex(a1.re + a2.re, a1.im + a2.im);
}

Notice that the operator+ function looks like an ordinary C function declaration, not a member function declaration. It also returns a complex value, and its parameters (a1, a2) are passed by value. It's important that there be exactly two parameters, since + is a binary operator. The friend declaration gives this global function access to the private members re and im, so the operator can be used anywhere.

Example
complex c1(1,3), c2(3,5), c3;   // declare some numbers
c3= c1+c2;                      // adds c1 and c2 through the overloaded + operator


The const Attribute


The attribute const can be applied to any data member, function parameter or function return value. It generally means that the affected data member is considered read-only. That is, any attempt to change the data member will be resisted by the compiler through an error message.

Objects containing const declarations do not require a read-only memory (PROM). They often reside in RAM memory along with other variables, and are therefore changeable, in a hardware sense at least. The only protection offered them is at the compiler level. The compiler effectively guarantees that when some object is marked constant, any attempt to change the object will draw an error. This guarantee can only be met if the resulting program is well-behaved and contains only C++ code. If the programmer includes some assembler fragments, then the compiler cannot provide an ironclad guarantee of constancy. The programmer can also misuse pointers, array indices, or a union struct to overcome a const directive.

Using const correctly and effectively may be a challenge to a programmer, since its main effect is to generate error messages that the programmer must repair. It's tempting to just avoid const altogether, but don't do it. Avoiding const altogether is not only unwise, but also impossible in some cases, for the following reasons:
- The C++ library functions use const in their function prototype declarations. You will sometimes be forced to declare certain variables const in order to make use of a library function you need. And that can cause a ripple effect through your declarations, forcing you to use const on certain of your variables and functions.
- By using const, the compiler can often introduce optimizations that would not be possible otherwise. For example, a constant integer will usually not require any variable memory space; it will simply be written into any instruction requiring it. Also, named constants can be combined with other constants arithmetically by the compiler, avoiding runtime arithmetic.
- The attribute const is how C++ provides a read-only characteristic to named objects. If you want to pass an object by reference to some function, but don't want that function to change anything in it, then const is the way to go.

The difficulties arise through initialization, and deciding just what should be constant. These are best illustrated through a series of examples, given next.

Non-class Examples
Here's a complete C++ file with various const objects. No class objects are involved in this. We've added comments in Italics to explain the situation; they apply to the following line or lines. The reader should type up this file and try it out on their favorite C++ compiler.
int main() {

Variable fred is declared as a constant integer. Here, the C++ syntax permits initializing it on the same line. You can also initialize non-constant variables the same way.
const int fred= 15;

Trying to set fred to a new value is resisted by the compiler.


fred= 22; // ERROR

Here, the first of two pointers to a char or char string is declared (the second, cptr2, appears below). What's constant is the thing pointed to, not the pointer. This first pointer is left uninitialized. You should consider the const attribute as referring to the object *cptr1, i.e. the array, not the pointer.
const char *cptr1;


You can also initialize the pointer. In this case, it will point to a constant string, which is compatible with the const char* type assigned to it. The keywords char and const can be interchanged, and mean the same thing.
char const *cptr2= "abcd";

Here's a non-constant pointer to a char or char array, which we'll use next.
char *ptr3;

You can change the value of a const char* pointer, provided that it continues to point to a constant char array.
cptr1= cptr2; // OK

Although ptr3 can normally be assigned-to (it isn't a constant pointer), the next line is illegal. If the compiler permitted this assignment, the programmer could later use ptr3 to change the value of the char array that cptr2 points to.
ptr3= cptr2; // ERROR

The next line is OK. Although ptr3 points to something that could legally be changed, having cptr2 doesn't violate any contract regarding keeping things constant.
cptr2= ptr3; // OK

Using cptr1 and ptr3 this way in a strcpy function is OK, because strcpy only reads the string in cptr1, while it changes the string in ptr3.

    strcpy(ptr3, cptr1);   // OK

Using cptr1 this way is illegal, because strcpy wants to change what cptr1 is pointing to.

    strcpy(cptr1, "new string");   // ERROR

Here's another usage of const. Here, the constant thing is the pointer; the thing pointed to is not considered constant. The compiler requires this pointer to be initialized to something. (Be careful: many compilers place string literals in read-only memory, so writing through this pointer, as done below, is only safe where the literal resides in writable storage.)
char *const ptrc1= "1234";

Another constant pointer, initialized the same way as ptrc1.


char *const ptrc2= "abcde";

The following is illegal because of the attempt to change a constant pointer.


ptrc1= ptrc2; // ERROR

This is also illegal for the same reason.


ptrc1= ptr3; // ERROR

This is legal, because strcpy won't change the pointer ptrc1, only whatever it's pointing to. See the comment above about how this pointer is initialized; whatever it points to must supply at least five writable bytes for this copy to be safe.
strcpy(ptrc1, ptr3); // OK

This is also legal, for the same reason.


    strcpy(ptrc1, ptrc2);   // OK
    return 0;
}

This can be very confusing to the beginning programmer, but it does have a certain rationale. You need to think in terms of how C declares things: the declaration resembles the usage. So if you need an array of pointers, the kernel of the declaration will look like this:
*thing[20]

since that's how you'd access one of the pointer members. The complete declaration needs a type designator, for example, char, and looks like this:
char *thing[20]

Note that [] has higher precedence than *, so this is an array of pointers, not a pointer to an array.

Also note that you must regard char as the type of one of the elements, not the whole array. That is, this declares an array of pointers to characters. A pointer to a pointer to something would be declared like this:
char **thing

and again, char refers to the ultimate target of these pointers. thing is a pointer to another pointer, which in turn points to a char element. Following that idea, the attribute const applies to the portion of the object that follows it in the declaration. So
char const **thing;

says that the object pointed to (through two indirections) is constant. thing itself can be changed, also *thing. But **thing (of type char) can't be changed. Now consider
char *const *thing;

Here, thing can be changed, *thing cannot be changed, and **thing can be changed. Finally consider
char **const thing;

which says that thing cannot be changed, but *thing and **thing can be changed.

Initialization of const Data Members in a Class


Using the const attribute with class data members follows the C ideas, except that the C++ syntax does not permit data members to be initialized in their declaration within the class. That is, you can't legally write this:
class myClass {
    const int sam= 15;   // ERROR!!
};

Why not? Did Dr. Stroustrup overlook something useful here? Actually, no. The general situation is that an object can be constructed in a variety of ways, through several different overloaded constructors. When an object is constructed, it's reasonable to expect that any "constant" data members might be initialized in different ways, depending on the constructor used. So constant data members must be initialized through their constructors, not through this one-size-fits-all initialization pattern. What's more, they must be initialized through the constructor initialization list discussed earlier, like this:
class myClass {
    const int sam;
    char const *fred;
    char *const mike;

    myClass(int p1, const char *s)
        : sam(p1), fred(s),   // these are OK
          mike(s)             // ERROR!
    {}

    myClass()
        : sam(0), fred(""), mike("1234")   // these are OK
    {}
};

The initial values of the integer sam and the char array fred can be made to depend on how this object is initialized. The compiler will ensure that sam will never be changed thereafter, once its initial value is fixed. Regarding the char array fred, it can be initialized to a constant array (formal parameter s) without a compiler complaint, because the declaration of fred is compatible with the declaration of s. However, mike cannot be initialized to s for the same reason that the compiler would reject the assignment
mike= s;

To initialize mike, you need an object declared with the attribute char *const. The array "1234" is acceptable. You cannot initialize a constant data member like this:
class myClass {
    const int sam;
    myClass(int p1) { sam= p1; }   // ERROR!
};

What's being attempted here is an assignment to a constant; the body of a function, whether inline or not, is exactly what the compiler is attempting to protect against such assignments. The rationale here is that if one such assignment were permitted, then any number would be; we could essentially write
sam= 10;
sam= 15;
sam= 20;

which is of course a violation of the spirit of protecting constants. On the other hand, the initialization list is designed to accept any particular data member no more than once. The compiler will notice that you've tried to initialize, say, fred twice and will complain about that.

Static Variables and Constants


The keyword static has two rather different meanings in C++. When applied to a global function or global variable (not in a class), it means that the function or variable name is local to this file. The name is not passed on to the linker, so it can't be referred to from another C or C++ file. This use of static is part of the C language and is carried over into C++ for compatibility reasons. In C++, static can also be written as a class attribute, like this, but is ignored by the compiler:
static class myClass {
};

Inside a class, static applied to a data member means that only one instance of that data member exists, regardless of how many class objects are created. The static data member is still considered to be a member of the class with regard to the private/protected/public protections, and most class features are supported. However, it cannot be initialized through a constructor, because several constructors can be written, implying several different initializations, yet only one value exists to be initialized. So any initialization must occur in a separate statement as illustrated below. Here's a class with two static data members with initializations:
class myClass {
    static int oneValue;
    static const char* str;
    myClass() {}   // oneValue, str can't be initialized here
};

// this requires the following for initialization
int myClass::oneValue= 55;
const char *myClass::str= "12345";


The two initialization lines are in the same scope as the class declaration. The data member oneValue is connected to its class through the myClass:: prefix. Similarly the constant data member str is initialized through the separate declaration line as shown. It is also connected to the class data member through the myClass:: prefix.

Functions and the const Attribute


The const attribute can appear in three different ways in a function declaration:
1. The return value of a function may be marked const. This is useful if the function returns a compound object such as a char* object. By returning a const char* object, the function is returning a pointer, but demanding that the thing pointed to be treated as constant.
2. Any of the formal parameters may be marked const. These place restrictions on how the function may use the parameters. For example, a formal parameter const char* p requires that what p points to cannot be changed in the function.
3. A member function itself can be marked const, meaning that the function will not change any of the data members of its class or any of its base classes during execution of the function. (A data member of such an object can still be changed by the function's caller, however; it will just not be changed during the call.)

Regarding usage (1) above, the following declares a function that returns a char array whose contents are constant:
const char *func(parameters);

This function might return a pointer to a literal string like "abcde" which clearly should not be changed. Or it may have set up some string that should not be changed by the caller. Regarding usage (2) above, here's the prototype declaration of strcpy as found in a system include directory:
char *strcpy(char* to, const char *from);

This says that the first char* pointer (to) is such that its contents (the thing pointed to) can be changed. The second pointer (from) is read-only. The purpose of strcpy is to copy the characters found in the second char array to the memory location of the first one. So the second char array is read-only, while the first one must be writeable. Regarding usage (3) above, the following syntax declares a member function in which the data members of its bound object are constant. That is, this function call is guaranteed not to change its object's internal state:
func(parameters) const;
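For example, here's a sketch of a const member function (the class name complex3 and the accessor real are invented); the compiler will reject any assignment to re or im inside its body:

    class complex3 {
        double re, im;
    public:
        complex3(double r, double i) : re(r), im(i) {}
        double real(void) const { return re; }   // promises not to change the object
    };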

Useless const Usages


When a function returns a simple value, such as an int or double, there's no point in marking the return value const. Thus
const int func()

is equivalent to
int func()

The reason is that the value returned is a copy of the literal value, and the caller should be permitted to do as he likes with it. Similarly, when a simple variable is passed to a function by value, nothing is gained by the attribute const. That is, the declarations

void f1(const int v);
void f1(int v);

are considered exactly equivalent. What happens is that a copy of the variable is passed in the runtime stack, so the function cannot possibly affect the original value. The appearance of & or * in a function formal parameter list may require the use of const. For example, this prototype
void f2(const int &v);

effectively passes a reference to an integer variable v to the function. The variable is marked read-only (const attribute) here.

Collections of Objects and Templates


Like C structs, we can organize a collection of objects into more complex forms, such as a linked list, tree or symbol table, using pointers embedded in the objects. There are several possible approaches to collections. One approach is to draw upon a special library of built-in container classes provided with most C++ compilers. A container class is a special C++ class designed to organize a set of other objects in some way. Container classes are usually written using templates, which are a shorthand way of designing organization classes around some arbitrary class of your choosing.

The Standard Template Library, or STL for short, provides a variety of container classes commonly used in C++ programs. We will be using these templates in our compiler code:
o vector, an ordered sequence of objects that can be indexed,
o string, a sequence of ASCII characters organized as a vector,
o list, a sequence of objects that cannot be indexed, but can be accessed from "front" to "back" with an iterator.

Each container class has one or more iterators associated with the class. An iterator is like a pointer, in that it can be moved along from one member of the class to another one. It can be dereferenced to yield a particular element. Finally, it can take on a special "null" value, indicating that the pointer movement has reached the end of the sequence. We'll illustrate the use of some iterators in our examples.

Another useful STL class is the map, illustrated below. We'll use a map to associate identifiers with types. An identifier is just a string formed from letters and digits. A type is a special class object that describes how the identifier may be used within our source language. For example, we might declare the name myname as an integer array. Then later, when myname appears in the program in a different context, the compiler will know that it represents an array of integers.

The push-down stack has an STL counterpart, which we won't use. We need to be able to refer to any object carried in a stack, whereas the STL stack permits only pushes and pops. However, it's easy to construct a template class using the STL vector class that supports what we need. We'll show how this is done shortly.
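
Here's a short sketch of using a map to associate identifiers with types. For simplicity, a type is represented here by a descriptive string rather than the type class used in the compiler; the names are invented for this illustration:

#include <iostream>
#include <map>
#include <string>
using namespace std;

int main()
{
    map<string, string> symtab;             // identifier -> type description
    symtab["myname"]= "array of integer";
    symtab["count"]= "integer";
    // look up an identifier seen later in the program
    map<string, string>::iterator p= symtab.find("myname");
    if (p != symtab.end())
        cout << p->first << ": " << p->second << endl;
    return 0;
}

This prints myname: array of integer.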

The STL string Class


We will be using the ANSI string class to support variable-length sequences of ASCII characters. As you can imagine, a compiler program uses strings for many different purposes. Most of them have lengths that are unpredictable until the compiler runs. Without a string class, we would be forced to allocate and deallocate these using the heap, in a way that would be both complicated and error-prone. Don't confuse this with the CString class found in the Microsoft Foundation Classes. You


are of course welcome to use CString instead. In some ways, CString is more convenient to use than a string. However, it isn't portable to Unix platforms, while string is.

The string class is actually a subset of the more general basic_string template class. A basic_string is similar to the vector template in that it can accept arrays of arbitrary length of any object. Most of the operations of the string class are supported by the basic_string class. You can find reference documentation on string in the Microsoft Visual Studio vs. 6.0 help facility. Look it up under the index feature, then follow the link to basic_string. Another tip -- the default font size is very small, almost impossible to read. In the help facility, choose View/Fonts, then pick a larger font size. Your new size is unfortunately not remembered from one VCC session to the next.

Some operations are NOT supported for string, at least not yet. These include:
o Reading a line from a file into a string, for example, cin.getline(line), where line is a string type (not supported!). Instead, you need to declare a char array long enough to carry the line. You can then copy the char array into a string variable using "=". (On the other hand, cin >> stringvar; works fine, but it doesn't read a whole line, just pieces separated by spaces.)
o Using sprintf with a string as the target, i.e. sprintf(svar, "d= %d", dv); This very useful formatting function only works with a char array. However, if you include the file lib/common.h in any program that needs this, you'll have a sprintf that does indeed format to a string variable.
o Converting a string back into a char pointer: use the string function c_str(). Say you have a string variable str. Then str.c_str() is a pointer to the char array hidden inside the str object. This is only useful for reading from the string. Don't try writing to the string with this pointer, or you will corrupt the string object internals.
o The varargs string operation vsprintf doesn't work with a string. You need to declare a char array for the operation, then (if necessary) convert the char array to a string.
o Declaring a static const string in a class. This doesn't work. Instead, use the old-fashioned static const char* type illustrated previously.
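
Here's a short sketch of the char-array workaround for sprintf described above, combined with c_str(); the variable names are arbitrary:

#include <cstdio>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string name= "count";
    int dv= 17;
    char buf[64];                              // sprintf needs a char array target
    sprintf(buf, "%s= %d", name.c_str(), dv);  // c_str() reads the string's chars
    string result= buf;                        // copy the char array into a string
    cout << result << endl;                    // prints count= 17
    return 0;
}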

Some string Class Operations


These are illustrated in the following sample program. The #include <string> is necessary for string support, as is using namespace std. The #include <iostream> supports cout, cin, ostream, etc.
#include <iostream>
#include <string>
using namespace std;

int main()
{
    int index;
    // declare an initialized array of strings
    const string names[]= {"al", "betty", "charles", "frank", ""};
    for (index= 0; names[index].size() > 0; ++index)
        cout << names[index] << ",";
    cout << endl;
    string s1, s2, s3= "5678";  // declaring three strings
                                // s1 and s2 have zero length
                                // s3 contains 5678 initially


    int len;
    len= s1.size();          // len should be 0
    s1= "abc";               // setting string s1 to a char array
    s1 += "def";             // concatenating another string to s1
    cout << s1 << endl;      // should print abcdef
    s1 += 'g';               // concatenating a character
    cout << s1 << endl;      // should print abcdefg
    s1.erase();              // clearing the string to empty again
    s2= s1+s3;               // concatenating s1 and s3, saving result in s2
    cout << s2 << endl;      // should print 5678
    cout << s2[1] << endl;   // should print 6
    s2[2]= 'C';              // setting a character in a string by indexing
    cout << s2 << endl;      // should print 56C8
    cout << s2.substr(2) << endl;    // prints C8 (starts at index 2 through end)
    cout << s2.substr(2,2) << endl;  // prints C8 (starts at index 2, length 2)
    s2.erase(1,2);           // removes 6C from s2
    cout << s2 << endl;      // should print 58
    return 0;
}

This program should print the following:


al,betty,charles,frank,
abcdef
abcdefg
5678
6
56C8
C8
C8
58

Caution
The string object can hold any number of characters. Its size will expand as needed to support concatenation and assignment. BUT - indexing must be within the current string bounds. (This applies to the [ ] operator, erase, substr and other string operations.) If an index is out of bounds, you'll get an exception. As we've noted above, sprintf, vsprintf and getline aren't supported for the string class. (Microsoft's CString class is supported in these functions.)
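
For example, here's a minimal sketch of what an out-of-bounds access looks like; with the standard string class, substr past the end raises an out_of_range exception:

#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;

int main()
{
    string s= "58";
    try {
        cout << s.substr(5) << endl;   // index 5 is past the end of "58"
    }
    catch (out_of_range& e) {
        cout << "caught: " << e.what() << endl;
    }
    return 0;
}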

String Iterators
An iterator is an object whose purpose is to "point" at some member of a container class. An iterator can usually be made to point at the "first" member, then moved to point to others. All of the STL container classes define two iterators -- one for non-constant members and the other for constant members.

Think of an iterator as a pointer. Suppose p is an iterator for some container class. Then:
o *p or p[0] returns the container class member pointed-to by the iterator. This is like dereferencing a pointer. If the iterator is non-constant, you can use this on the left side of an assignment statement to "assign-to" the member.


o object.begin() returns an "initial" iterator, pointing to the "first" object (whatever that means).
o object.end() returns an "end" iterator. This points to "one-past" the "last" object.
o p++ or ++p advances the iterator to the "next" object in the container class.

You declare an iterator like this. Suppose you have an STL vector carrying myClass objects. (The STL stack itself provides no iterators, which is one reason we write our own stack class below.) Then
vector<myClass> myvec;                 // creates a vector
vector<myClass>::iterator sp;          // creates an iterator for the vector
vector<myClass>::const_iterator csp;   // creates a constant iterator

For the string class, an iterator can be declared in this fashion:


string::iterator sp; string::const_iterator csp;

Then, for a string mystring, mystring.begin() returns an iterator pointing to the first character in the string, and mystring.end() returns an iterator pointing to "one-past" the last character. ++sp advances the iterator to the next higher string index, and *sp returns the character at the current iterator position. Thus we could print the characters of a string like this (the hard way):
string mystring= "abcde";
string::const_iterator csp;
for (csp= mystring.begin(); csp != mystring.end(); ++csp)
    cout << *csp;    // print one character
cout << endl;

The constant iterator const_iterator permits reading from the string, but not setting any character in it. Obviously, if you declare the string const, then only a const_iterator is legal for it.
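
Conversely, a non-constant iterator can be used to modify the string in place. Here's a minimal sketch (the variable names are arbitrary):

#include <cctype>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string word= "abcde";
    string::iterator p;
    for (p= word.begin(); p != word.end(); ++p)
        *p= (char) toupper(*p);   // assigning through a non-const iterator
    cout << word << endl;         // prints ABCDE
    return 0;
}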

Templates and a Stack Container Class


The STL stack class permits arbitrary push and pop operations, but not access to an arbitrary stack element. There's no way to extend it to provide these additional services. So we will write our own, using the STL vector class and templates. You can find this definition in the Qparser tools as lib/stack.h. Here's the class header:
template <typename K>
class Cstack {
private:
    vector<K> stack;
    int tos;
public:
    Cstack(void) : tos(-1) {}
    Cstack(const Cstack& cs, int sx);   // copy sx elements maximum
    Cstack(const Cstack& cs);           // copy the whole stack
    void push(const K& element);        // copy and push an element
    void setTos(const K& element, int offset= 0);  // set an element
    K pop();                            // pop the stack, returning the TOS element
    const K& at(int sx) const {return stack.at(sx);}  // element at index sx
    const K& stackRef(int pos);         // return element at TOS-pos
    int size(void) const {return tos+1;}  // number of elements
    void clear(void);                   // empty the stack
    // print contents to cout; requires cout << K
    void dump(ostream& out= cout);
};


K represents an arbitrary class whose objects this stack will carry. For example, if we need a stack of strings, then we can instantiate one like this:
Cstack<string> stringStack;

We'll use the vector stack to carry our pushdown stack. This will be expanded in capacity as needed on the push operation. However, the stack's contents will often be less than its capacity, since objects are typically pushed and popped repeatedly. So the vector's capacity may be larger than currently needed. Rather than deallocate the excess capacity, we will just retain it.

Variable tos will be defined as the index of the stack-top element. This will always be less than the stack's capacity. When the stack is empty, tos will be -1. This will also be its initial value.

No destructor for Cstack is required, since the vector object's destructor will be called automatically when a Cstack object is deleted. Note that the elements of type K will also have their destructors called, which may not be what you prefer.

Two copy constructors will be needed. One copies the source stack up to a maximum of sx elements only. The other copies the whole stack. Note that the stack elements are also copied through their own copy constructors, whatever that implies. Here are the copy constructors:
// copy constructor, up to sx elements only
template <typename K>
Cstack<K>::Cstack(const Cstack<K>& cs, int sx)
{
    int index;
    stack.reserve(cs.size());   // reserve capacity; push_back fills it in
    for (index= 0; index < sx && index < cs.size(); ++index)
        stack.push_back(cs.at(index));
    tos= (int) stack.size() - 1;
}

// copy constructor, all elements
template <typename K>
Cstack<K>::Cstack(const Cstack<K>& cs)
{
    stack= cs.stack;
    tos= cs.tos;
}

The push member function is easy to write, using the vector functions resize and "at". We need to call resize to guarantee that the "at" function has a valid position in which to copy the element. tos is incremented before its use as an index, so that it becomes the index of the topmost stack element.
template <typename K>
void Cstack<K>::push(const K& element)
{
    stack.resize(tos+2);
    stack.at(++tos)= element;
}

The pop member function pops the stack, returning a copy of the popped element. We test for an illegal stack underflow through an assert.
// pop one element, returning it
template <typename K>
K Cstack<K>::pop()
{
    assert(tos >= 0);
    return stack.at(tos--);
}

The stackRef function returns a reference to an element relative to the stack top. If you want the top element, use pos=0. For the element just beneath the stack top, use pos=1, etc. This function calls assert if the reference is invalid.
template <typename K>
const K& Cstack<K>::stackRef(int pos)
{
    // return a reference to the element at tos-pos,
    // where tos refers to the stack top
    assert(pos >= 0);
    assert(tos >= pos);
    return stack[tos-pos];
}

Function dump produces a stack dump that looks like this:


3 five
2 six
1 seven
0 eight

The "0" refers to the stack top. (We pushed "five", then "six" then "seven" then "eight" in the stack). The output can be directed to any object compatible with ostream. The default output stream is cout.
template <typename K>
void Cstack<K>::dump(ostream& out)
{
    if (tos < 0)
        out << "(empty stack)" << endl;
    else {
        for (int k= 0; k <= tos; ++k) {
            out.width(3);
            // top of stack should be printed LAST
            out << tos-k << ' ' << stack[k] << endl;
        }
    }
}
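
Here's a brief usage sketch, assuming lib/stack.h is included; it produces the four-line dump shown above:

Cstack<string> s;               // a stack of strings
s.push("five");
s.push("six");
s.push("seven");
s.push("eight");
cout << s.stackRef(0) << endl;  // prints eight, the stack-top element
s.dump();                       // prints the numbered dump shown above
string top= s.pop();            // removes and returns "eight"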

A Tree Class
Another useful class found in the Qparser lib directory is Ctree, found in tree.h. This provides a way of organizing objects as a tree. There's no STL library object for a tree, and it's difficult to design one as a template. So we will instead use Ctree as a base class for some attribute class. Here's the basic structure of a Ctree class:
class Ctree {
    Ctree* parent;
    Ctree* child;
    Ctree* sibling;
    // etc.
};

Each node carries three pointers. The parent pointer points up in the tree one level to the node's parent node. This will be NULL if the node is at the tree's root. The child pointer points to the node's leftmost child. The node may have more than one child by setting up a linked list of sibling pointers. The sibling pointer points to the node's right sibling. In general, this forms a singly linked list of nodes comprising all the children of the parent node.

A typical tree is shown in figure 3 as Ctree nodes. The same tree, in abstract form, is shown in figure 4. Notice how the root node has three children: root points to the leftmost child, then that child points to its right sibling, which in turn points to its right sibling. You can think of a tree node as pointing to the head node of a linked list of children, each of which can point to the head node of a linked list of children, etc.

Obviously, the setting of these pointers requires some structure. If a child pointer points back to a left sibling, we will have a circularity. For this reason, these pointers are carried internally as a private data structure. The tree constructor and accessor functions are structured in such a way as to avoid (but not fully prevent) forming a non-tree graph.

Fig. 3. Tree represented as Ctree nodes. (Diagram: nodes root, c1, c2, c3, c1.1, c1.2 and c3.1, each drawn as a box with parent, child and sibling pointer fields; root's child pointer leads to c1, whose sibling chain links c2 and c3.)

Fig. 4. Abstract tree from figure 3. (Diagram: root has children c1, c2 and c3; c1 has children c1.1 and c1.2; c3 has the single child c3.1.)


This is NOT a template class. We will use it through inheritance. When we have an object that we'd

like to have carried within a tree, we simply inherit Ctree. We then acquire the Ctree pointers with each such object, as well as the Ctree access, insertion and unlinking functions.

There's no such thing as an "empty" tree, except that we can create a Ctree pointer that is initially NULL. We can also use the sibling pointer to form a linked list of nodes. None of these has a parent, but the leftmost node has a sibling pointer to the next element in the list, which can point to another sibling, etc. This is sometimes useful in accumulating a list of objects in Qparser.

If a top-level parent node has siblings, we need to be careful not to try to unlink one of the inner siblings. That will cause an assertion failure, since unlinking a node requires a parent, from which the previous node can be located. Also, if delete is called on a top-level node that is not the leftmost of its sibling chain, the left portion of the tree will be undeleted, and contain a dangling pointer.

Every tree node inserted in a tree must be separately allocated from the heap. For example, it's possible (but an error) to insert a child node through a pointer to a node that's already in the tree. This error will go unnoticed until the tree is deleted, at which time the heap manager will notice that the common node has previously been deleted.

Ctree Operations
What would we like to do with our Ctree class? Here are some supported operations:
o access one of the children of a node (toChild),
o access one of the siblings of a node (toSibling),
o access the parent of a node (toParent),
o insert a new child or a sibling (insertChild, appendChild, appendSibling),
o unlink a tree rooted in some node (unlink, unlinkSibling),
o delete a tree, given its root node (deleteChild, deleteOne, deleteTwo, deleteThree),
o count the children or siblings of a node (children, siblings),
o print a tree in a structured way, using a virtual function (dump).

tree.h Header File


File tree.h is given below (it can be found in directory qparser\lib):
// tree.h
#ifndef TREE_H
#define TREE_H
#include <iostream>
#include <string>
using namespace std;

// This provides a tree structure to carry arbitrary objects.
// The tree is formed by a child pointer, which points 'down', and
// a sibling pointer which points to the right sibling.
class Ctree {
private:
    Ctree *parent;    // points up the tree, NULL if the root
    Ctree *child;     // child tree
    Ctree *sibling;   // sibling tree

    void dumptree(ostream& out, string& prefix) const;
    void deleteSiblings(Ctree *node);
public:
    // shallow copy constructor
    Ctree(const Ctree &ct) : parent(0), child(ct.child), sibling(ct.sibling) {}
    Ctree(Ctree *p) : parent(p), child(0), sibling(0) {}
    virtual ~Ctree(void);               // removes tree nodes
    void setParent(Ctree *p) {parent= p;}
    Ctree *toSibling(int n= 1);         // go to the nth sibling of this node
    Ctree *toChild(int n= 0);           // go to the nth child of this node
    Ctree *toParent(void) const {return parent;}
    void insertChild(Ctree *tp, int pos= -1);  // insert tp as leftmost child
                                        // by default, or just after child 'pos'
    void appendChild(Ctree *tp) {insertChild(tp, 32767);}
                                        // append tp to the child nodes
    void appendSibling(Ctree *tp);      // append tp as a sibling
    Ctree *unlink(void);                // unlink this one node, returning it
    Ctree *unlinkSibling(void);         // unlink this node's sibling, returning it
    int children(void) const;           // how many children in this node
    int siblings(void) const;           // how many siblings of this node
    void deleteChild(int n= 0);         // delete the child tree at index n
    void deleteOne(void) {deleteChild(0);}
    void deleteTwo(void) {deleteChild(1);}
    void deleteThree(void) {deleteChild(2);}
    void deleteBinChildren(void) {deleteOne(); deleteOne();}
    void deleteAll(void);               // delete all siblings & children, but not this
    void dumpTree(ostream& out= cout) const;  // do the whole tree structure
    virtual void dump(ostream& out) const= 0; // do this one node
};
#endif

Although there are many member functions here, the basic idea is simple. Given some tree node N, we can attach or insert a new child node C to N anywhere among the existing children of N (if there are any) through the function insertChild. The pos variable specifies a child position among 0 or more possible children already attached to N. For example, if pos is -1 (the default), then the new child is inserted to the left of any already there. If pos is 0, it is inserted just to the right of the leftmost child, pushing other children over one position to the right. If pos is a large positive number, the new child is appended to the right of any children already there. (See the sketch below.) We can also append a right sibling to N through appendSibling. Appending implies attaching the new node at the rightmost end of any existing chain of sibling nodes.

A node can be unlinked through unlink, which merely disconnects it from its existing tree, but does not otherwise change its child or right-sibling relations. One node can be unlinked through unlinkOne; this function effectively removes one node from the existing tree, causing its left sibling to be connected to its right sibling. Unlinking a node requires that the node have a parent. Only through the parent pointer can the node determine if it has a left sibling. This means in effect that unlinkOne applied to the root node will simply return that node. The root is not expected to be part of some


sibling list.

Deleting a node (through the destructor ~Ctree) causes it to first be unlinked; then it and all its descendant nodes are deleted. If this node is the root node, and it has no right or left siblings, then the whole tree is deleted. All these operations are safe in the sense that no pointers are left invalid after a delete or unlink operation. Of course, the structure must be a valid tree, and not some graph that contains a cycle or multiple paths between two nodes.
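
Here's a small sketch of the insertChild positions described above. It uses the Cntree derived class defined later in this appendix; the integer values are arbitrary:

Cntree *N= new Cntree(0);
N->appendChild(new Cntree(1));     // children of N: 1
N->insertChild(new Cntree(2));     // pos= -1, leftmost: 2 1
N->insertChild(new Cntree(3), 0);  // just right of the leftmost child: 2 3 1
N->appendChild(new Cntree(4));     // appended at the right: 2 3 1 4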

Preconditions
In constructing a tree, each node must be separately allocated from the heap. Its pointer is then passed to insertChild or appendSibling. Using a stale pointer, a pointer to a global or local Ctree object, or a pointer to an object already in a tree will cause a deletion error.

When unlinking a node, that node must not be at the root level of the tree with left siblings. You can unlink any internal node, and you can unlink the leftmost sibling at the top level. Note that we can have a tree with several "root" nodes, i.e. nodes that are siblings of each other, but which have no parent. (This is how we use Ctree to support a simple linked list.) Suppose root is the leftmost top-level node. Then unlinking root->sibling will fail because there's no way to locate the left sibling of a node except through its parent. The nodes at the top level have no parent, so there's no way to find a left sibling, nor even to know whether there is one.

Implementation of Ctree
An implementation of the Ctree member functions may be found in the file lib/tree.cpp.

Using Ctree
Ctree is an abstract class and cannot by itself be instantiated. So Ctree must be a base class to some other class that defines its pure virtual function dump. For example, here's a simple derived class that carries an integer value on each tree node:
class Cntree : public Ctree {
    int value;    // information carried by the tree
public:
    Cntree(int v) : Ctree(0), value(v) {}   // a new node starts with no parent
    virtual void dump(ostream& out) const {
        out << value;
    }
};

Note that we must define function dump, with the same signature as the base class's pure virtual function, so that we can instantiate a Cntree. The purpose of dump is to report something about the information carried in the tree node to an output stream. This will be used for debugging and reporting purposes. In Cntree, dump merely writes the integer value to the stream. A root node is established by instantiating a Cntree from the heap, like this:
Cntree *node= new Cntree(3);

This allocates a node carrying the value 3. A child can be appended to any node by instantiating another Cntree, then calling appendChild, like this:
Cntree *newnode= new Cntree(4);
node->appendChild(newnode);


This creates a new child if none existed until now, or appends the new child as the rightmost sibling of those already there.
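
To make the shape concrete, here's a sketch (the values are arbitrary) that builds a root with two children and one grandchild, in the style of figure 4:

Cntree *root= new Cntree(1);
root->appendChild(new Cntree(2));              // first child of root
root->appendChild(new Cntree(3));              // second child, right sibling of the first
root->toChild(0)->appendChild(new Cntree(4));  // child of the first child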

Dumping a Tree
A tree can be worked through systematically using the functions toChild and toSibling. These return a pointer to the leftmost child and the right sibling of this node, respectively. Either pointer may be null, of course. For example, the following code walks through a tree starting at node this, in a depth-first, left-to-right fashion, printing each of the integer values in the order in which they are encountered:
void Ctree::dumpTree(ostream& out)
{
    if (toChild())
        toChild()->dumpTree(out);
    dump(out);    // the virtual function in the derived class
    out << endl;
    if (toSibling())
        toSibling()->dumpTree(out);
}

Cntree *node= new Cntree(0);   // build a tree centered on node
node->dumpTree(cout);          // print the tree

Here, the function call dump(out) in fact calls the function dump declared in the derived class Cntree. We see that the Ctree class behaves somewhat like a container class. It holds something in its derived class (it doesn't know what), it provides all the tools needed to link the objects carried in a tree structure, and it can print the entire tree. When it must print the information in its (unknown) object, it calls the virtual function dump to do that. Unlike a container class, Ctree does not encapsulate its internal linkages; these are protected only by a few private data members (the pointers) and private functions. It is not a template. Instead, you must write a derived class to use its features.

Indenting the Tree Dump


The dumpTree function given above lists all the tree nodes at the same indenting level. It's hard to tell from the printed dump which are children and which are siblings. A better plan is to indent the information from some tree level by one unit more than the previous level. The tree levels are determined by the child relations. As one moves from a node at level L to one of its children, one moves to level L+1, so the children of some parent node should be indented more than their parent node. All siblings of some node will be printed with the same indenting.

An easy way to do this is to design a helper function that carries the required indenting as a parameter. Function dumptree(ostream& out, string& prefix) is designed to do that. The purpose of the string prefix is to carry a marker string to write to cout just before reporting one of the tree nodes. In the Qparser implementation, this indenting string is embellished with vertical and horizontal bars to make the tree structure more evident, as we'll see next. Here's what the new indenting code looks like, using the helper function to carry an indentation:
void Ctree::dumpTree(ostream& out) const
{
    string prefix= "+-";
    dumptree(out, prefix);
}

void Ctree::dumptree(ostream& out, string& prefix) const
{
    /* prefix helps format the child nodes, like this:
       +-root
       | +-c1           ; first child of root
       | | +-c11
       | | +-c12
       | +-c2           ; second child of root
       |   +-c322
       +-s1             ; first & only sibling of root
         +-s11          ; etc.

       Notice how the vertical lines show the children at various
       levels in the trees--this is the prefix string.
    */
    out << prefix;
    dump(out);
    out << endl;
    int plen= prefix.size();
    if (sibling)
        prefix[plen-2]= '|';
    else
        prefix[plen-2]= ' ';
    prefix[plen-1]= ' ';
    if (child != 0) {
        prefix += "+-";
        child->dumptree(out, prefix);
        prefix.erase(plen, 2);
    }
    if (sibling != 0) {
        prefix[plen-2]= '+';
        prefix[plen-1]= '-';
        sibling->dumptree(out, prefix);
    }
}

Cntree *node= new Cntree(0);   // build a tree centered on node
node->dumpTree(cout);          // print the tree

The whole tree is printed, thanks to the recursive calls of dumptree. For example, in calling dumptree through a sibling node, we also print its sibling node, then the one after that, etc. Similarly, by calling dumptree through a child node, we also print all that child's siblings, and the trees under each of those nodes. Using a common indentation reference string in all these recursive calls is dangerous, but it happens to work in this case. (A safe strategy is to create a separate indentation record on the runtime stack for


each call). The reason this is safe is that every call of dumptree preserves the state of this reference string between its entry and exit points; a separate stack copy of the string would therefore be exactly replicated for every call.

References
[1] Bjarne Stroustrup, The C++ Programming Language, third edition, Addison-Wesley.
[2] Y. Langsam, M. J. Augenstein, A. M. Tenenbaum, Data Structures Using C and C++, 2nd edition, Prentice-Hall.
[3] R. Decker, S. Hirshfield, Working Classes: Data Structures and Algorithms Using C++, PWS Publishing Co.
[4] Allen I. Holub, C+C++: Programming with Objects in C and C++, McGraw-Hill, 1992.
[5] Stephen Prata, C++ Primer Plus, third edition, Sams Publishing, 1998.


Appendix 2: The Intel 80x86 Microprocessor


W. A. Barrett, San Jose State University napp2.doc, vs. 4.1

Introduction
In this appendix, we review the architecture of the Intel 80386 processor, and provide a capsule definition of the instructions used in the student compiler. The 80386 features have been carried over into the subsequent 80486 and 80586 (Pentium) processors. This line of micros supports 32-bit arithmetic, 32-bit addressing, multiple tasking, memory protection, and more. For arithmetic operations, the micro contains a full set of integer, floating-point and BCD arithmetic instructions.

We will focus on the so-called protected mode of this processor, which is most commonly used in modern personal computers (PCs) with the popular Windows and Linux operating systems. The processor also supports a normal mode (often called real mode), in which addressing is through 16-bit registers. Every processor initially runs in normal mode, but is usually immediately switched to protected mode. Only normal mode was supported in the earlier platforms (8080, 8086, 80286), and it is rarely used today. One exception is the 80186, intended for the higher-end embedded processor market, which has no protected mode. The 80186 line includes models that operate at very low power and modest cycle times, with few pins compared to a high-end Pentium.

This by no means exhausts the features of the Intel 80x86 platform. For more details, the reader is urged to consult an Intel Reference Manual or any of several popular books on the processor, for example, [1, 2, 3]. Intel also provides online documentation for its line of microprocessors through its web site http://www.intel.com.

Organization and Registers


Figure 1. Execution and Bus Interface Units of the Intel 80386 Microprocessor. (Diagram: the Execution Unit (EU) holds the general registers EAX, EBX, ECX, EDX, ESP, EBP, ESI, EDI, the ALU, the status word (flags), and the instruction execution logic; the Bus Interface Unit (BIU) holds the segment registers, the instruction pointer, the instruction queue, and the address generation and bus control logic, which connects to the memory/device bus.)

An 80386 processor contains a set of registers, an arithmetic unit, a bus interface, an instruction sequencing and interpretation unit, and the necessary control logic that causes this device to operate as a von Neumann machine. See Fig. 1. The boxes represent information storage elements, and the dark arrows information transfer paths. Instructions reside in external memory, and are fetched, interpreted and executed by the processor through the Memory/device bus. This bus, which contains address and data lines, is provided as external logic pins. A few additional control pins provide enough logic support for the processor to access external RAM memory, send and receive information through ports, and to operate in parallel with a companion floating-point unit (FPU). The FPU is described later in this text.

A register is a form of memory capable of carrying a fixed number of binary units (bits) of information. The register information can be rapidly copied to other registers, copied to external memory, manipulated in various ways, and transferred to the arithmetic unit. The 80386 contains 10 registers with a capacity of 32 bits each, and 6 with a capacity of 16 bits. (Only four of the 16-bit segment registers are shown in figure 2. The other two, FS and GS, are similar to ES and are rarely used.)

These are grouped into three general categories according to the operations that are most efficiently managed within each of them. Four of them are general registers (EAX, EBX, ECX, EDX), designed to support integer arithmetic and logic operations. The next group of four are index and offset registers (ESP, EBP, ESI, EDI). These are used principally to assist memory addressing, and are particularly useful to access elements of a struct or array. The remaining four registers (CS, DS, SS, ES) are called the segment registers. These are used in conjunction with memory addressing.


Each of these registers is hard-wired into the integrated circuitry of the chip. The basic operations defined on the registers, and the way in which random-access memory is accessed, are built into the chip's design. As a programmer, you cannot alter that design. Any software that operates on an Intel chip is ultimately determined by the built-in instructions and registers of this chip.

Fig. 2. Intel 80386 Register Structure in protected mode. (Diagram: data registers EAX, EBX, ECX, EDX, each 32 bits, with 16-bit lower halves AX = AH:AL, BX = BH:BL, CX = CH:CL, DX = DH:DL; pointer and index registers ESP (stack pointer), EBP (base pointer), ESI (source index), EDI (destination index), whose 16-bit forms SP, BP, SI, DI apply to normal mode only; 16-bit segment registers CS (code), DS (data), SS (stack), ES (extra); the 32-bit instruction pointer EIP; and the EFLAGS register with bits O, D, I, T, S, Z, A, P, C occupying positions 11 down to 0.)

The four data registers EAX, EBX, ECX and EDX are each 32 bits, and can support 32-bit arithmetic. However, there are also instructions that ignore the upper 16 bits of these registers, using only the lower 16 bits. The four data registers AX, BX, CX and DX are the lower 16 bits of the full 32-bit registers. These four 16-bit registers can in turn be split into 8-bit halves by other instructions. Thus, register AH is the same as the upper 8 bits of the AX register, and AL is the same as the lower 8 bits of AX. If the bits in AX are set by some instruction, then AH and AL are also set, and vice versa. Some of our compiler examples will use 16-bit arithmetic and others 32-bit arithmetic. Both are supported in protected mode.

The reason that these lesser-precision registers remain in the processor is to continue to support legacy software. There are still many PC programs running that were designed for the earlier 8086 and 80286 processors. Most users want these to continue to operate on their newer (and faster) PC, so Intel made sure that all earlier modes are supported on its more advanced systems.

The status word register EFLAGS is partitioned into 9 bits that describe the general outcome of certain operations. (The remaining bits are reserved by Intel for future expansion.) For example, if an addition or subtraction causes an overflow (a value greater than can be held in the destination register), bit 11 (O for overflow) of the status word is set. We will also discuss the sign bit (S), zero bit (Z), and carry bit (C). The other bits have special purposes that don't interest us here.

The arithmetic unit, or ALU, is designed to accept two 8-bit, 16-bit, or 32-bit values, and perform one of several operations on the values, including arithmetic, bitwise logic, shifting, and more.

Instruction Cycle
A typical instruction cycle is as follows: an instruction is fetched from memory, and brought into the processor's instruction queue. Instructions vary in length, from a single byte up to fifteen bytes. The leading bytes carry codes that determine the instruction's total length. The instruction queue in advanced Pentium processors is


very large, permitting it to fetch instructions well in advance of the instruction currently being executed. Short program loops may also execute directly from the electronic instruction queue, making their execution much faster than if the external RAM had to be referenced each time. The instruction queue and its mechanism are effectively hidden from the programmer. For our purposes, we can just assume that one instruction at a time is fetched from memory by the processor, then executed.

The current instruction is interpreted through the internal control logic circuitry of the processor. It may call for transferring bytes from one register to another, or transferring bytes between memory and a register, or for performing some arithmetic/logical operation. Instructions in the 80x86 may also require a variable number of clock cycles to execute, from 2 upward. For example, a multiplication of two 32-bit integers may require a dozen or more clock cycles. In any case, each instruction is designed to perform its function in a maximum number of clock cycles, and never to run forever.

When the current instruction's operations are complete, the processor discards that instruction, and transfers control to the next instruction through another fetch cycle. Of course, the instruction is never truly "discarded"; it continues to reside in RAM until needed again. And the instructions in RAM were originally copied from a hard disk or CDROM by a special program called a loader, prior to executing the program.

Since the instructions are carried in memory along with all data, fetching an instruction is a matter of reading a particular sequence of memory bytes, starting at a particular byte address. That address is carried in the control register EIP, or instruction pointer register, which is 32 bits. Register EIP is incremented as instructions are fetched. For example, if an instruction contains 3 bytes, then EIP will effectively be increased by 3 just after fetching the instruction bytes.


Fig. 3. Intel 80386: Addressing 4 Giga Bytes. (Diagram: the 16-bit segment register selects a 32-bit segment address from the segment translation table; an adder combines that segment address with the 32-bit logical offset, taken from a register or an address in the instruction, to produce the 32-bit physical memory address.)

The variable-length instruction form used in the 80x86 line means that an instruction in memory may start at an arbitrary byte. However, most memory fetches are by 32-bit or 64-bit chunks, since the memory data bus is that wide. This saves a lot of time during processing, but makes the instruction queue mechanism rather more complicated. Some instructions should be on a word (16-bit) or quad (32-bit) boundary for highest performance. For example, most branch instructions should have a target that resides on a quad boundary. A target address of 0x7848 is preferred to (say) 0x7847 or 0x7842. The latter are not evenly divisible by 4, while 0x7848 is.

Addressing
Memory addressing in protected mode is done by combining an offset (32 bits) with a segment address. The principal segment registers are CS, DS and SS. CS is used for instructions (sometimes called code or text). DS is used for global data (sometimes called common data). SS is used for a runtime push-down stack, or simply, the stack. Notice that the segment registers are only 16 bits each.

A physical memory address is formed as shown in figure 3. We form a 32-bit segment address by using the segment register as an index into a segment translation table. Each entry in the segment translation table contains four 32-bit numbers, whose significance is too complicated for this introduction. Among the members is a 32-bit segment address. The segment address is then added to our 32-bit offset value to form a 32-bit physical address. See figure 3. Here's an example of how the segment address and offset are combined to form a physical address:


segment register (16 bits) = 0x0020
segment address (from segment translation table) = 0x00004000
offset register value (32 bits) = 0x00051422
physical address value (32 bits) = 0x00004000 + 0x00051422 = 0x00055422

The same physical address can be formed from different offsets and segment address values, depending of course on how the segment translation table is organized. And that table is organized by the operating system, not the casual user.

This combination of a segment and an offset is done for each memory access. In the Pentium, the lookup and addition is performed by separate adder and logic paths, so that it can be done in parallel with the arithmetic and logical operations.

Paging
Paging is an alternative to segmentation found in most architectures. The idea of paging is that memory can be divided into equal-sized non-overlapping pieces. For the Pentium systems, each piece is typically 4096 bytes, chosen because that size matches the size of a typical disk sector. Segments, by contrast, can be any size, from a few bytes to 4 Gbytes.

The advantage of segmented memory is that a program's data space or code space will be in one contiguous block of memory. The disadvantage is that the operating system memory manager will have difficulty in allocating many such blocks, when they can have widely differing sizes. With a paging strategy, each page has the same size. That makes allocating pages and finding them much easier.

When paging is selected in a Pentium system, the address that we called the physical address above goes through one more set of translation tables to yield the memory address. The necessary lookups and translations are done on the fly during program execution, so that the programmer is unaware of page and/or segment boundaries. The Pentium supports both paging and segmentation. The paging mechanism is more basic, and supports ordinary addressing as well as the segmentation mechanism.

Paging also supports virtual memory. With paging, one bit of the paging lookup table is used to indicate whether virtual memory applies to this page. If it does, then yet another mechanism is invoked that checks whether the page is available, i.e. whether it's in RAM or on disk. If it's available, then nothing more need be done. If it's not available, the OS is asked to read the page from disk. In this way, the amount of RAM memory available for a task can be much larger than the physical RAM. The additional space comes at the price of more disk accesses and a greatly slowed-down task.
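
As a worked sketch of the page arithmetic (the address value is made up): with 4096-byte pages, the low 12 bits of an address select the byte within the page, and the upper bits select the page.

linear address = 0x00055422
page number (address >> 12) = 0x00055
offset within page (low 12 bits) = 0x422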

Who's Responsible for What in Protected Mode


The programmer:
o can, but should not, change any segment register,
o can change any of the data, pointer and index registers,
o can change certain of the EFLAGS bits,
o cannot directly change the instruction pointer EIP register (this is changed by executing a jump instruction),
o can read any segment register,
o will normally be concerned with the memory offset from a segment base through instructions, index registers, and the like,
o is constrained to certain sections of memory (data space, stack space, code space). Any attempt to read or write outside these constraints will produce a segmentation violation,
o requests services from the operating system for any access to peripherals, other than memory. This includes the keyboard, mouse, video monitor card, serial port, parallel port, disk and more. These services are provided through special gateway function protocols. You cannot directly execute an INP (port input instruction), for example, from a program. You must call a service function to access the device on the port,
o may write and execute interrupt service routines, but the interrupt servicing is the province of the operating system. In particular, DOS interrupts cannot be executed.

The operating system bears these responsibilities:
o Portions of it run in a special privileged mode that permits the OS programs to access any section of memory, read or write to ports, launch programs, service interrupt requests, and the like.
o It manages disk, video display and all other peripheral devices. These are usually organized around interrupts, DMA and the like so that they are serviced efficiently and in a timely manner. These are considered resources to be shared among several different tasks, so any one task cannot be permitted to directly access any of them.
o It manages tasks. When a programmer writes a program in C, that's not the end of it. Her program must be compiled and linked into an executable file. At some point, she will want to execute the program. At this point, the operating system steps in with these tasks:
  o Memory space must be allocated to the program: data, code, stack, heap. This usually involves a memory manager, which is responsible for allocating and releasing physical memory space. This is where the segment registers and segment translation tables are set up.
  o The program must be loaded, i.e. the code copied to memory, along with any initialized data. Stack and heap are usually just allocated and are initially full of garbage.
  o The program, as a task, is assigned a position in a task queue, and will begin execution when the task manager decides it's appropriate to allocate processor time to it.
  o When a task needs peripheral services, the OS must respond to those requests. Usually the task is suspended until the service request is complete, although some requests can be made on a "no-wait" basis.
  o When the task is finished, the OS is responsible for closing any open files, and releasing the program's space.

Virtual 8086 Mode


The Pentium supports a virtual 8086 mode within protected mode. In this mode, a task can be launched and run as though it lived on an old-fashioned 8086. One megabyte of memory is allocated to the task, out of which the stack, data, code and heap must be allocated. The register set is the same as the normal mode 16-bit registers. The segment registers are also 16 bits each, but memory addresses are figured differently: the offset (16 bits) is added to 16 times the segment value (also 16 bits). This yields a 20-bit memory address, illustrated below. Up to 2^20 bytes can be accessed this way, and that's one megabyte.

DOS interrupt functions can be used, provided that the operating system supplies a virtual DOS COMMAND.COM utility. Ports can be accessed through interrupts, in a limited fashion. Port access is handled in a virtual way, to the extent supported by the containing operating system (usually Windows). For example, if you access a serial port through INP or OUTP, the instruction is directed to the appropriate OS service function, which will perform the operation in a proxy manner.
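
Here's a worked example of that 20-bit address computation (the register values are made up):

segment register (16 bits) = 0x1234
segment value * 16 = 0x12340
offset (16 bits) = 0x0010
20-bit physical address = 0x12340 + 0x0010 = 0x12350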


Direct video access through memory references is also supported, implemented by messages sent to the video graphics card. The novel thing about virtual 8086 mode under Windows is that you can run several of these tasks as separate windows. You can also choose a small DOS window to carry your information, or use the whole screen. When your 8086 program terminates, control is returned to Windows. A virtual 8086 program crash may kill Windows 98, but cannot kill Windows 2000. (Windows 2000 uses full protected mode, privileged accesses, etc., while Win98 uses a mixture of normal and protected mode services.)

Why Segment Registers?


Why was the chip designed this way? Surely the integrated circuit could have been smaller if we just used 32-bit registers to address memory. An important reason is that it permits program and data relocation.

Consider the problem faced by the program loader when a new program is to be fetched from disk and loaded into RAM: there are almost surely several dozen other programs already running. Most of these support the operating system, and some are user programs that the computer user has chosen to run, but which are temporarily idle, or running in the background. This means that the RAM is already partially occupied with code and data. A new program is supposed to run, and the loader is expected to find a block of memory in RAM in which to load it.

If every address in a program referred to an absolute physical RAM address, then the loader would have to adjust each and every memory address reference in the program, and (worse) make sure that the program is loaded into that specific location in memory. Either of these strategies is unacceptable today. Adjusting a large number of memory addresses is not only risky (possible loader/compiler bugs), but increases the size of the executable file, and increases loading time. Requiring that a particular location and block of memory be used for some program means that other programs (with a similar requirement) can't be carried in memory to support multi-tasking.

The segment register mechanism implies that the loader need only choose a block of memory starting at some page boundary. Nothing in the program needs to change, only the setting of the segment registers (and their associated translation tables, page tables, etc.). The program can be loaded any place in memory for which enough contiguous space is available, and little or no address repair is needed. This significantly speeds up the loading of a new program, something that every PC user appreciates. It comes at the cost of some additional CPU logic to perform all that addition at runtime, but since that logic operates in parallel with execution, the time needed for it effectively disappears.

The operating system can also move programs and data around in memory, if the program is asleep. The trick is to just copy a segment block to somewhere else in memory, then change the appropriate segment register so that it again points to the origin of the block. Note that the program must be asleep during this shift, otherwise some locations in the block may change during the movement, becoming lost or corrupted. This rarely happens on a PC, but may happen at any time in a time-sharing multiprocessing environment, in which new tasks may be introduced or old ones completed while other tasks continue to run.

Why Four Segment Registers?


The 80x86 has six 16-bit segment registers, named CS, DS, SS, ES, FS and GS. The first three are involved in almost every program and are essentially controlled by the operating system and the loader. The latter three are available for special assembly-level programmer or compiler use. CS, the code segment register, points to the block of memory containing the program instructions at


runtime. It works with the offset register EIP. Thus the next instruction to be fetched is at CS:EIP. (In fact, instructions are prefetched through yet other registers invisible to the programmer. However, the programmer can view EIP as though it contained the offset of the instruction next to be executed.)

Segment register DS is the data segment register. By default, this will be combined with an offset obtained from any instruction that contains a memory offset, and also with registers EBX or ESI.

Segment register SS is the stack segment register. The corresponding default offset registers are ESP, the stack pointer register, and EBP, the base pointer register. The 80x86 supports a variety of stack operations, including pushes, pops, and support for recursive function calls.

Segment registers ES, FS, and GS are the extra segment registers. These can be set freely by a programmer, and used in most instructions as a substitute for DS. These will not be used in our compiler examples. A few extended instructions require the use of ES as a segment register. We will in fact set ES to the same value as DS, and just ignore the other two.

A segment register is normally not specified as part of an instruction, unless its value must be fetched or set. However, virtually every memory reference involves one of the segment registers DS or SS, even though the segment register won't appear in the instruction explicitly.

Booting a Computer and Loading Programs


Any program, assembler or otherwise, requires that the CS, DS and SS segment registers be set properly before execution can begin.

When a processor is first powered up, its registers are set to particular values specified by the manufacturer. For the 80386, the processor starts at address 0xFFFF0, in normal mode, with IP= 0 and CS= 0xFFFF. It's important that the first instructions (corresponding to that address in particular) be in PROM, not RAM, so that a bootup program will be launched. The first bootup program is the BIOS (Basic I/O System). PC users are familiar with this, since by pressing the DEL key during bootup, it can be configured in a variety of ways. When the BIOS has completed its tasks, it reads sector 0 of the principal hard disk (the C: drive under Windows). That's expected to contain a boot loader for the operating system (Windows, Linux or whatever), which then takes over.

Eventually, your computer is ready to execute your programs, not just those built into the boot process, BIOS and the operating system. A compiler is involved in most of these programs, although many BIOS functions and some critical operating system drivers are written in symbolic assembler. Common to all these programs is the instruction set of the microprocessor, which is the same for all programs, whether it appears in the boot loader, BIOS, the operating system, or a user program.

Since any user program will be started from DOS or Windows through a program that's already running, DS, CS, SS and EIP will be initially set by the operating system. Our compiler examples will be started through a C program which calls the external program pasMain. That C program expects that our example will use the segment registers DS, CS and SS that were set up for it. This root C program will of course be linked with the example assembly program to form a unified executable.

Assemblers
Most of the assembly code found in this text uses the Masm assembler conventions. See [1,2,3] for more information on the 80x86 processor and assembler. Many different assemblers for the 80x86 are available, some as freeware. The CDROM accompanying this text contains a licensed copy of Masm (ml.exe) that will run on any modern PC. It can also generate debugging information, and is compatible with the Microsoft Visual Studio C/C++ development environment. [4,5,6]. The debugging environment in Visual Studio for both

Appendix 2: The Intel 80x86 Microprocessor, page 467

C++ code and assembly is excellent, with special windows providing a view of all registers and selected memory locations. The debugger automatically switches from C mode to assembler as required by the source code.

For those using Linux, the Gnu AS assembler can be loaded with the development package [7, 8]. We also recommend installing the documentation package, as it carries information and man pages regarding AS. The AS assembler uses a fundamentally different notation than Masm, but both can be used to generate the same binaries. This assembler is compatible with the Gnu gdb debugger. gdb is much clumsier to use with assembler than Visual Studio, but it serves the purpose. With practice, one can become proficient in its use.

Another freeware assembler, nasm, provides a syntax similar to that of Masm [9]. Unfortunately, it provides no tools for debugging, nor a companion C compiler. The syntax, although similar to Masm's, has enough differences to make it useless for the purposes of this text.

Addressing Modes
The 80x86 supports several addressing modes, summarized as follows. An effective address is the result of the combination of various offsets and register values, as we'll see. It will be combined with a segment address to form the desired physical address.

Direct addressing. The address displacement (8, 16 or 32 bits) from the segment origin is taken directly from the instruction. The instruction may be 3 to 8 bytes in length. The first two bytes specify the instruction and the addressing mode, and the remaining bytes will be the segment offset. In the case of the jump instructions, the offset is added to the current CS:EIP physical address. That is, all branch instruction offsets are relative to the current code location.

Example
.data
memloc  dd 0             ; allocate a 4-byte double word, containing 0
                         ; (see below for more details)
.code
        mov eax,memloc   ; fill eax from memloc
        mov memloc,eax   ; copy eax to memloc

This deserves some explanation. .data says that the following instructions or data are to go into the data segment. .code means the following group are to go into the code segment. dd is not an instruction; it's a Masm directive ("data double") that causes a 4-byte 0 to be allocated in data space. The address of that 0 is given the name memloc so that we can refer to it later. The two mov lines are instructions. The first copies the contents of memloc to register EAX. The second copies EAX to memloc.

Register Indirect addressing. Here, the contents of one of the registers EBP, EBX, ESI or EDI is used as the offset from a segment register. If EBX or ESI is used, the default segment register is DS. If EBP is used, then SS is the default. If EDI is used, then ES is the default. These defaults can be overridden, though we'll never use that feature.

Example of Register Indirect Addressing


    .data
    memloc  dd 4,5,6            ; allocate space for three 4-byte double words
    .code
            lea ebx,memloc+8    ; set register ebx to the address of the "6"
            mov eax,[ebx]       ; copy memory at address ebx to register eax
            sub ebx,4           ; make ebx point to the "5"
            mov [ebx],eax       ; copy register eax to the location of the "5"

BEFORE this operation, memloc contains 4, 5, 6. AFTER this operation, memloc contains 4, 6, 6. The lea (load effective address) instruction takes the offset of memloc, adds 8 to it (8 bytes, essentially), then stuffs that offset into register EBX. The sub instruction subtracts 4 from register EBX. The notation [ebx] refers to a memory address whose offset is in register EBX.

Based Addressing. Here, the effective address is the sum of a register address and an offset in the instruction. This is essentially a combination of the previous two addressing modes. It will be used with register EBP to determine the address of a local variable or a passed parameter in a function call.

Example of Based Addressing


    .data
    memloc  dd 4,5,6            ; allocate space for three 4-byte double words
    .code
            lea ebx,memloc      ; set register ebx to the address of the "4"
            mov eax,[ebx+8]     ; copy the "6" to eax
            mov [ebx+4],eax     ; copy eax to the "5" in memory

BEFORE this operation, memloc contains 4, 5, 6. AFTER this operation, memloc contains 4, 6, 6. Here's how that works: memloc is the memory offset of the value 4, which is carried as a four-byte integer. The lea sets register EBX to the offset of memloc. The first mov accesses memory location memloc+8, just as in the previous example, except that this offset is obtained by adding 8 to register EBX. Since the first word is at memloc, the second is at memloc+4, and the third is at memloc+8, this accesses the data value 6.

Indexed Addressing. This is effectively the same as based addressing, except that different offset registers are used. See the previous example. Registers EBP, ESI and EDI can also be used in the same way as EBX.

Based Indexed Addressing. Here, the effective address is the sum of two registers (EBX or EBP combined with ESI or EDI) plus a displacement derived from the instruction. This mode is not used in the student compiler.

Example of Based Indexed Addressing


    .data
    memloc  dd 4,5,6,7,8            ; allocate space for some double words
    .code
            lea ebx,memloc          ; set register ebx to the address of the "4"
            mov edi,8               ; set register edi to 8 = 2 double word sizes
            mov eax,[ebx+edi+4]     ; ebx+edi+4 points to the "7"
            mov [ebx+4],eax         ; copy eax to the "5" in memory

The based indexed address is in the second mov instruction: the [ebx+edi+4]. In this, the contents of EBX is added to the contents of EDI, and 4 is added to that sum to form the memory offset. Since EBX was set to memloc, and EDI to 8, this addition accesses memloc+12, which is the 7 in memory.

String Addressing. The 80x86 supports many special instructions to copy character strings from one memory location to another, or to compare two strings in a variety of ways. A special addressing mode involving ESI and EDI is used for these, in conjunction with the segment registers DS and ES. These will be used in certain support functions, but not generated directly


by our compiler. Indeed, even highly optimized compilers will rarely generate these from C or C++ code.

Allocating Memory with db, dw and dd


We've introduced examples of memory allocation in the previous section. Let's be more specific. Data segment memory is allocated through a data directive, such as db for an 8-bit byte, dw for a 16-bit word, dd for a 32-bit double word, or dq for a 64-bit quad word. Suppose one word (two bytes) is needed in the data space of memory; then we write
    .DATA
    myword  dw 0

in the assembly code. The .DATA tells the assembler that the next block of information must be located relative to DS when the program is run. The dw stands for "data word", and asks that one 16-bit word be reserved in memory for the variable that will be called myword in future instructions. The 0 after dw sets the initial value of this word to 0. (It can be any number). You can specify a sequence of bytes, words or doubles like this:
    .data
    astring   db "this is a string of bytes",0   ; terminated by a 0 byte
    achar     db 'a', 'b', 'c', 'd'
    morewords dw 1,2,3,4, 8 dup(22)   ; values 1,2,3,4 followed by eight value 22s

In this example, suppose that astring is assigned the offset 0x0004000. It carries 26 bytes (0x1A), so the next offset (achar) will be 0x000401A. This will be the offset of the letter a. Letter b will be at 0x000401B, letter c at 0x000401C, and letter d at 0x000401D. The next data group, morewords, will be at offset 0x000401E; it holds the four words (1,2,3,4) followed by eight 22s, or 12 words = 24 bytes in all. Notice that a C-style string can be written with double-quote marks, as in the astring declaration. However, the terminating null character must be explicitly added at the end. You can also write bytes as individual characters, as shown by achar.

.CODE and .DATA


Keyword .CODE (notice the leading period) tells the assembler that the next block must be located relative to CS when the program is run. Our strategy in the student compiler is to assume that everything is covered by .CODE. When we need to reserve data space, we will use the pair
    .DATA
    .CODE

for data allocations. These are considered to be assembler directives or pseudo-ops. A directive results in no memory allocation or instructions by itself, but rather changes some mode or state in the assembly operations. These and other keywords can be in either upper or lower case, so .data is equivalent to .DATA. The assembler collects all the information under .data into one long list, and likewise all the information under .code. These two lists are then used to allocate and initialize a data segment memory block and a code segment memory block. For example, if an assembly program contains this:
    .data
    a   dw 1,2,3,4
    .code
    b   dw 5,6,7,8
    .data
    c   dw 9,10
    .code
    d   dw 11,12

Then at runtime, the data segment will carry the (2-byte word) values 1,2,3,4,9,10, while the code segment will carry the values 5,6,7,8,11,12. Notice how the data value 9 follows data value 4. Just how these will actually be placed in physical memory depends on how the loader decides to set the segment registers ds and cs. In any case, these memory sections will not overlap.

Material placed in a code segment may be considered read-only at runtime. This is where the processor instructions belong, along with any constant data values. Material placed in the data segment or the stack segment is considered readable and writable at runtime.

There are no assembler directives for loading information in the stack, since the stack is considered to be purely dynamic in character. The stack is assigned memory space, but that space is not initialized; it will contain whatever values were left over from some previous program. A user program will write and read the stack through its execution at runtime.

Data Transfer Instructions


The most common instructions are those that transfer data between two registers, or between a register and memory. This is what the mov instruction does.

mov Instruction
Instruction Mnemonic        Purpose
mov destination,source      copy the contents of the source to the destination

This instruction is misnamed. It does not in any sense move the source to the destination; it copies the bytes literally from the source to the destination. The source byte values are unchanged, but the destination byte values prior to executing this instruction are discarded and replaced by the copied byte values. For example, if AX held 0x3456 and DX held 0x1234, then

    mov ax,dx

will change the contents of AX to 0x1234. DX will remain unchanged, with 0x1234. Similarly,

    mov eax,edx

will copy the contents of the 32-bit register edx to register eax.

Either the source or the destination, but not both, can be a memory address. That is, mov can't be used to transfer bytes from one memory location to another. The source and the destination must refer to the same number of bits: 8, 16 or 32. It must also be possible for Masm to figure out which of these three sizes applies in each case. If a register is involved, its size determines whether a byte, word or double is copied. If a memory location is involved, its declaration (as db, dw or dd) determines the size. In some cases, Masm cannot work that out. For example,
mov [ebx],20

could be any of the three sizes. To resolve the ambiguity, you need to write in a type specifier, like these:
    mov DWORD PTR [ebx],20
    mov WORD PTR [ebx],20
    mov BYTE PTR [ebx],20


There is no mov instruction that copies a 16-bit source to a 32-bit destination or vice versa. (This is a form of conversion, and there are separate instructions for dealing with conversions, explained later.) Let's look at mov again. Suppose now that myword has been declared as a 32-bit double word (with dd). To copy its current value to register EBX, we can write
mov ebx,myword

The assembler can figure out that myword is relative to the segment register DS, that it's a double (not a byte or a word), and that it therefore makes sense to copy it into the 32-bit register EBX. The reverse transfer is equally easy:
mov myword,ebx

mov Instruction with a Literal Constant


Another form of mov transfers a constant value to a register. It looks the same as a memory reference instruction, but the binary form of the instruction has a different encoding. Here's an example:
mov AX,55

The 55 is interpreted as a decimal number. Since the destination is a 16-bit register, all 16 bits will be filled with the twos-complement form of the constant. The constant value is carried directly in the instruction, so no additional memory reference is needed. Such a constant must be compatible with the register. For example, the number 140,000 is too large to fit in a 16-bit register, but will fit in a 32-bit register. So

    mov AX,140000

will be flagged by the assembler as impossible, while

    mov EAX,140000

is legal. The number can be expressed in binary, octal or hexadecimal. (We'll use hex for certain constants generated by the compiler.) A hex number must end with the letter h or H, a binary number with the letter b or B, and an octal number with the letter o or O. Any number must start with a decimal digit, so a leading zero is written when the first significant digit would otherwise be a letter, as in 0F0FFh. Here are some examples:
    mov AX,0F0FFh       ; hexadecimal
    mov AH,010101110b   ; binary
    mov AH,377o         ; octal

Named Constants
In the MASM assembler, a constant can be given a name, like this:
value5 EQU 5

This means that the symbol value5 will henceforth be regarded as the constant 5. The keyword EQU does not assign memory space at runtime to this value; it merely causes a replacement of the name by the constant during assembly. We can then use the constant name in a mov instruction, like this:
mov AX,value5

This looks exactly like a memory reference instruction, except that the name value5 was declared using EQU rather than DW. DW allocates memory at runtime, while EQU doesn't. All such names must be declared somewhere in the assembly program; otherwise the assembler will not know how to organize instructions that use names. Obviously, the reverse form
mov value5,AX

is illegal, since a constant can't be changed. Masm will generate an error in this case.


A mov can also copy a constant to a memory location. For example, if mem is some memory address, then
mov mem,55

is legal. mov instructions do not affect any of the flags.
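To summarize the difference between EQU and a data directive, here is a small sketch (the names count and limit are made up for illustration):

    count  equ 8            ; assembly-time constant: no memory is allocated
    limit  dw  8            ; a data word: memory is allocated and may change

           mov ax,count     ; assembles as mov ax,8 (a literal constant)
           mov ax,limit     ; a memory fetch at runtime
           mov limit,ax     ; legal: limit is a memory word
         ; mov count,ax     ; illegal: a constant can't be a destination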

Named Constant Strings


EQU can also be used to assign a name to a string. That string will literally replace its name wherever the name appears in the subsequent assembly file:
string5 equ <[ebx+22]>

The string referred to here is actually just


[ebx+22]

This form of equ uses the left and right angle brackets (< and >) as the string opening and closing delimiters. This permits you to use this named form to name compound constant structures, like this one:
msg1 equ <"Error in assembler",0>
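Wherever the name appears later, the assembler substitutes the string before assembling the line. As a small sketch (using the string5 declared above):

    mov eax,string5     ; assembles exactly as:  mov eax,[ebx+22]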

mov With Extended Addressing Modes


Look at myword, which was declared earlier as a memory word:
    .DATA
    myword  dw 0

We can ask for several words to be allocated in memory instead of just one, like this:
    moreWords  dw 5,7,9,11

This allocates four 16-bit words. The first carries 5 initially, the second 7, the third 9 and the last one 11. Recall that since these are memory words, their values may change when the program is executed. These are merely the initial values, set up before your program starts. The name moreWords will refer to the address of the first datum, word 5, in the assembler. We can also use a Masm macro feature to allocate lots of words, each initialized to a particular value:
stillMoreWords dw 256 DUP(17)

This allocates 256 words, in contiguous memory, each containing value 17. Bytes are allocated with db. Double-words (32 bits each) are allocated with dd. So
yetmoreWords dd 5,7,9,11

will allocate four 32-bit words (16 bytes in all). Now suppose we want to access the fourth word (11) associated with the name moreWords, declared above. That can be accessed in several different ways, as shown next:
mov ax,moreWords+6 ; direct addressing

In this example, the assembler will set up an address for the mov that is 6 bytes larger than the address of moreWords. The memory value "5" is at byte offset moreWords, value "7" is at moreWords+2, etc. Note that any constant offset like this is always measured in bytes, not in the units of the registers. That address (actually an offset combined with segment register DS) will be included in this particular mov instruction, making it a direct address. Now look at the second method.
    mov ebx,6
    mov ax,[ebx+moreWords]   ; based indexed addressing

Here we've set the ebx register (register names can be either in upper or lower case) to 6, then use based-indexed addressing to fetch the word. moreWords still refers to the memory address, but the current value of ebx is added to it to form the effective address. Look at the third method,


    lea ebx,moreWords+6   ; indexed addressing
    mov ax,[ebx]

Here we use a new instruction, lea, or load effective address, to set ebx to the address of the fourth word in the moreWords group. This address will then be used in the mov to fetch that word. Notice that [ebx] is a memory reference, not a register reference. The brackets are suggestive of array access, but in assembler, they mean the memory address carried in the register.

[Fig. 4, mov instruction examples: panels (a)-(d) of the figure diagram the register and memory contents before and after each of the four mov instructions discussed in the next section.]

Examples of mov Instruction


Figure 4 shows some examples of mov. In figure 4(a), a constant 15 is copied into register ax with the instruction
mov ax,15


Before this instruction is executed, ax contains some 16-bit number that will be lost. After the instruction is executed, ax will contain the value 15. The source of the value "15" is actually built into the instruction, which of course was fetched from the code segment in memory. This instruction appears like this in memory:

    B8 0F 00

Here, the B8 is a code that essentially says that this is a mov instruction to register ax, with a 16-bit constant. The value 15 is in the next two bytes,

    0F 00


which is the value 15 expressed in hexadecimal, stored least-significant byte first. Most assemblers have an option that lists the hexadecimal form of each instruction. By turning this on, you can see how symbolic assembler instructions are converted into a compact binary code that the microprocessor logic is designed to interpret.

In figure 4(b), the number in some memory location which we've called m1 is copied to register ax. Notice that the previous value in ax is lost, and that the value in memory is not changed.

In figure 4(c), the number in register ax is copied to a memory location m1. The previous value in that memory word is lost. The value in ax is not changed, however.

In figure 4(d), register ebx contains the offset of some memory location, in this case 3175. The


instruction uses the form [ebx], which causes the value in ebx to be treated as an address, not a value. The value at that memory offset, 323 in this case, is copied to register ax. As usual, the previous value in ax is lost, but the memory value at offset 3175 is unchanged. The mov instruction in figure 4(d) can be reversed, like this:
mov [ebx],ax

This operation is not shown in figure 4, but it copies the value in ax to a memory location whose offset is in register ebx.

The lea Instruction


The lea instruction generates the offset of some memory word. (We've used this in some examples given above.)

Instruction Mnemonic        Purpose
lea destination,source      copy the memory offset of the source to the destination

Notice the difference between

    mov ebx,moreWords

and

    lea ebx,moreWords

The mov references the memory at runtime, to fetch a word stored there into register ebx. The lea doesn't have to access memory at all; it only needs to work out the effective offset of moreWords. In this case, that's an assembler constant, so the effective address will be carried directly in the instruction and just copied to ebx. You can also write
lea eax,[ebx]

This appears to fetch the value in memory at the offset in ebx. In fact, it just transfers the value in ebx to eax. In general, the offset appearing in the right operand is worked out and transferred to the left operand. So this is equivalent to
    mov eax,ebx

A more interesting example is

    lea eax,[esi+ebx-25]

which adds the values in registers esi and ebx, subtracting 25 from the result; register eax gets the result. In protected mode, all offsets are 32-bit, so the lea instruction requires a 32-bit register as its destination. In normal mode, offsets are 16-bit, and lea therefore expects a 16-bit register as its destination. The destination cannot be a memory location. This instruction does not affect the flags.
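Because lea works out its arithmetic without touching the flags, it can even update a register between a cmp and its conditional jump. A small sketch (the label somewhere is made up):

    cmp ax,bx           ; set the flags
    lea ecx,[ecx+1]     ; ecx = ecx+1, with the flags undisturbed
    jge somewhere       ; still tests the result of the cmp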


Push, Pop and the Runtime Stack


We mentioned that the 80x86 supports a stack. The push and pop instructions transfer words between the stack and some other source or destination:

Instruction Mnemonic    Purpose
push source             copy a 16- or 32-bit source to the stack through a stack push
pop destination         copy the stack word or double to destination through a stack pop

The stack in the 80x86 operates in what appears to be reverse order. When you push a word on the stack, the stack pointer ESP is decreased. So, for example, if ESP is equal to 0x0000FF00, and you push a double word in the stack like this:
    .data
    w   dd 22
    .code
        push w

Then ESP becomes 0x0000FF00 - 4 = 0x0000FEFC. ESP will always point to the last thing pushed in the stack. In this case, it will point to the stack copy of word w.
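Pushes and pops normally come in matched pairs, with each pop undoing the motion of one push. A small sketch, again assuming ESP starts at 0x0000FF00:

    push eax        ; ESP becomes 0x0000FEFC; a copy of EAX is at [esp]
    push ebx        ; ESP becomes 0x0000FEF8
    pop  ebx        ; ESP back to 0x0000FEFC
    pop  eax        ; ESP back to 0x0000FF00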

Allocating Stack Space


Recall that the stack is just another region of memory. You need to arrange for some stack space before starting your program, which can be done through the linker, using the option
link /ST:10000

This allocates 10000 bytes for the exclusive use of the stack. You can find this option by looking at file pasprogs\makefile, used to assemble and link the Pascal programs found in directory pasprogs. With Masm, you can also specify the stack space through the pseudo-operation
.STACK 10000

which allocates 10,000 bytes to the stack. Allocating stack space amounts to fixing the stack segment register SS, and the initial stack offset ESP. Since the offset is always positive, ESP will be set to 10000 initially by the linker directive. You can verify this by looking at the registers with a debugger just after starting your program. The space available for the stack therefore lies between the physical address [SS] and [SS]+10000. ([SS] refers to the segment base address specified by SS.) As items are pushed into the stack, ESP will of course decrease. When ESP reaches 0, the stack is considered to be full.

A push decrements ESP first, then copies the source word to the address SS:ESP. If a word (16 bits) is pushed, its copy in the stack will be at the offset addresses ESP and ESP+1 after the push. If a double (32 bits) is pushed, its copy in the stack will be at the addresses ESP through ESP+3 after the push. Since each push decrements the stack pointer, the location of something pushed in the stack relative to ESP will change with each push and pop. However, the memory location of any data pushed into the stack will not change.

A pop is the reverse of a push. If a word is popped, then two bytes at offset ESP are copied to the destination, and ESP is incremented by 2. A double popped from the stack will increment ESP by 4.

Register ESP is clearly the offset of the word at the "top of the stack", or TOS. If the address of this word is needed for some other purpose, register ESP can be copied to some destination like this:
mov ebp,esp

Notice that this does not affect the information in the stack, nor does it access the stacktop word. It


merely copies the stacktop address to register EBP. push and pop do not affect the flags register.
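For example, the stacktop word can be inspected without popping it; a small sketch:

    mov ebp,esp     ; ebp now carries the offset of the stacktop word
    mov eax,[ebp]   ; read that word; ESP and the stack are unchanged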

Arithmetic Instructions
Arithmetic in the 80386 is supported for 8-, 16- and 32-bit numbers, in both signed and unsigned forms. It happens that the operations of addition and subtraction for signed twos-complement integers are identical to the operations for unsigned integers. The mnemonic instructions for these are add and sub. Each of them has a variety of binary instruction forms depending on the operand types and whether 8-, 16- or 32-bit numbers are being combined, but the assembler works all that out. So, as with mov, the add mnemonic actually refers to a set of instructions; each member of the set specifies one of the three sizes, and the nature of the destination and the source. Thankfully, Masm knows just how to encode each form by examining the types of the destination and the source, whether they be a register, memory, or constant.

Instruction Mnemonic    Purpose
add dest,source         add source to dest, with the result replacing dest
sub dest,source         subtract source from dest, with the result replacing dest

The forms of the destination and source are the same as with mov. One or the other, but not both, can be a memory address. The source can be a constant. Both can be any data or index register. A memory address, if used, can take any of the addressing forms described earlier: direct, indirect, indexed with offset, etc. Notice that the old value in the destination is destroyed, and replaced with the result of the addition. The source value is not changed.

When the add or sub instruction is executed, certain flag bits are set. Recall that these are carried in the flags register, Fig. 2. In particular:

CF, the carry flag, is set if the operation results in a carry out of the high-order bit of the result. Otherwise, it's cleared.
SF, the sign flag, is set to 0 if the resulting number is 0 or greater than 0 (considered as a twos-complement number), and 1 otherwise. (An overflow changes this.)
ZF, the zero flag, is set if the result is identically 0; otherwise it's clear.
OF, the overflow flag, is set if the result is too large a positive number or too small a negative number for the destination, assuming that twos-complement arithmetic is used.

Two other flags, PF, the parity flag, and AF, the auxiliary carry flag, are also set by these operations. They won't be used in our compiler. AF is useful for BCD arithmetic, and PF is useful in error-checking and correcting algorithms. Refer to the Intel documentation for details.

Our student compiler will not be concerned about overflow and carry. If the result of an addition or subtraction is out of range, you'll simply get an invalid number. Providing an appropriate trap for overflow would require a few extensions to the initialization functions, which we haven't done. We will make use of the SF and ZF flags, but only in the context of the comparison instruction cmp, described later.
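As a small sketch of how these flags respond (the values are chosen only for illustration):

    mov al,0FFh     ; al = 255 unsigned, or -1 signed
    add al,1        ; result is 00h: ZF=1, CF=1 (carry out), SF=0
    sub al,1        ; result is 0FFh: ZF=0, CF=1 (borrow), SF=1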


Negate, Increment, Decrement and Compare


Instruction Mnemonic    Purpose
neg dest                negate dest, with the result replacing dest
inc dest                replace dest with dest+1 (increment)
dec dest                replace dest with dest-1 (decrement)
cmp dest,source         compare dest with source, setting flags only

These simple operations provide shortcuts for frequently used arithmetic operations. neg is essentially equivalent to subtraction from 0. Like the sub instruction, it accepts a destination, which may be a register or a memory location, and has an 8-bit, a 16-bit and a 32-bit form. It also sets the flags in the same way as sub would. Instruction inc is equivalent to
add dest,1

except that the instruction is shorter and faster. Like add, it sets the flags. Instruction dec is equivalent to a subtraction of 1.

The cmp instruction is equivalent to sub, except that the destination is not changed. Its sole effect is to set flags corresponding to the result of the comparison. The two flags of interest are SF and ZF. The following table shows how these two flags can be used to judge the comparative sizes of the operands.

dest ? source       SF, ZF flags after cmp
     <              SF = 1, ZF = 0
     >              SF = 0, ZF = 0
     =              SF = 0, ZF = 1

For example, if the destination is less than the source, then dest - source is negative and non-zero. The sign flag SF will be set and the zero flag ZF will be clear.
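A brief sketch of the table in use (the values are made up):

    mov ax,3
    cmp ax,7        ; 3-7 is negative and nonzero: SF=1, ZF=0, so ax < 7
    cmp ax,3        ; 3-3 is zero: SF=0, ZF=1, so ax = 3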

Multiplication
Instruction Mnemonic    Purpose
mul opd (double)        multiply EAX by opd, with result to EDX:EAX, unsigned
imul opd (double)       multiply EAX by opd, with result to EDX:EAX, signed
mul opd (word)          multiply AX by opd, with result to DX:AX, unsigned
imul opd (word)         multiply AX by opd, with result to DX:AX, signed
mul opd (byte)          multiply AL by opd, with result to AX, unsigned
imul opd (byte)         multiply AL by opd, with result to AX, signed
imul op1,op2 (any)      multiply op1 by op2, with the result going to op1

Unlike addition and subtraction, multiplication depends on whether the two operands are signed or unsigned. (Try it!) Two bytes can be multiplied to yield a word product (8 bits x 8 bits -> 16 bits). Two words can be multiplied to yield a double word product. But note that for the first six instructions, one of the operands must be register AX, and the result is always in AX or DX:AX. (DX carries the most-significant word, and AX the least-significant word.)


In protected mode, two doubles can be multiplied to yield a quad result (64 bits). One of the doubles must be in register EAX (for the first six instructions), and the result will always be in EDX:EAX, with EDX carrying the most-significant double word and EAX the least-significant double word.

The seventh instruction form (available for imul only) takes two operands, just like the add instruction. op1 and op2 must be the same size: 8, 16 or 32 bits. op1 is multiplied by op2, and the product replaces op1. The operands cannot both be in memory, and op2 may be a constant. Since the product of two 8-bit numbers can be larger than 8 bits, the CF and OF flags are set if there's an overflow; similarly, CF/OF will be set if there's overflow in a 16- or 32-bit multiplication.

mul is used for unsigned multiplication and imul for signed multiplication. The multiplier is given with the instruction, and can be a register, constant or memory location. Overflow can never occur for the first six instructions, since there's always enough space in the destination. However, in our student compiler, integers are carried in 32-bit signed form. So when two integers are multiplied, the result could be larger than a 32-bit double word. One can test for overflow anyway by checking both CF and OF; if both are set, then overflow has occurred. However, this requires additional instructions, and won't be done in the student Pascal compiler.
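A small sketch of why the signed and unsigned forms differ: the byte 0C8h is 200 when read as unsigned, but -56 when read as signed twos-complement.

    mov al,0C8h
    mov bl,3
    mul bl          ; unsigned: AX = 600 (0258h)
    mov al,0C8h
    imul bl         ; signed: AX = -168 (0FF58h)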

Division
Instruction Mnemonic    Purpose
div opd (double)        divide EDX:EAX by opd, with result to EDX, EAX, unsigned
idiv opd (double)       divide EDX:EAX by opd, with result to EDX, EAX, signed
div opd (word)          divide DX:AX by opd, with result to DX, AX, unsigned
idiv opd (word)         divide DX:AX by opd, with result to DX, AX, signed
div opd (byte)          divide AX by opd, with result to AH, AL, unsigned
idiv opd (byte)         divide AX by opd, with result to AH, AL, signed

Division also depends on whether the operands are signed or unsigned, so we have different forms for these instructions. A quad word dividend (in EDX:EAX), when divided by a double word divisor, yields a remainder in EDX and a quotient in EAX. A double word dividend (in DX:AX), when divided by a single word divisor (the operand), yields a remainder in DX and a quotient in AX. A single word dividend, when divided by a byte divisor, yields a remainder in AH and a quotient in AL.

The quotient of a division can overflow its destination. Also, division by zero will not only set the overflow flag, but will generate an interrupt, causing the operating system to abort the task. We have not defeated this interrupt in the student Pascal compiler, so that will stop the program. There is no two-operand form for division as there is for add, sub and imul.

In the student Pascal compiler, integers are carried in 32-bit signed words. In order to start a division, we need to form a 64-bit quad word. That's done through the special instruction cdq, described next.


Instruction Mnemonic    Purpose
cdq                     convert EAX to EDX:EAX, signed
cwd                     convert AX to DX:AX, signed
cwde                    convert AX to EAX, signed
cbw                     convert AL to AX, signed

The cdq instruction expands a signed number in EAX to the quad word in EDX:EAX. In fact, it just sets EDX to 0 if the sign bit of EAX is 0, and sets it to 0xFFFFFFFF (all ones) if the sign bit of EAX is 1. This operation would require several instructions otherwise, so this is a useful shortcut. The cwd instruction expands a signed number in AX to the double word in DX:AX. In fact, it just sets DX to 0 if the sign bit of AX is 0, and sets it to 0xFFFF (all ones) if the sign bit of AX is 1. cwde converts a signed AX value to EAX. Instruction cbw extends the sign of the byte register AL to AH, effectively causing register AX to carry the same signed value as was previously in AL. This instruction has no value to the student Pascal system, since no signed byte values are supported there; the char type is considered to be an unsigned integer, with the range 0..255.

Division of an unsigned double, word or byte first requires setting EDX, DX or AH (respectively) to 0, which is easily done with one of these instructions:
    xor EDX,EDX
    xor DX,DX
    xor AH,AH

The xor operation (exclusive-or) causes all bits to become zero, regardless of their present state. It happens to be shorter and faster than a mov with a constant 0.

Example 1
Suppose we have a signed byte value in register AL. We wish to divide it by a 16-bit number in memory called dnum. Here's what the assembler code looks like for this operation:
    .data
    dnum    dw 15       ; declare a memory word containing 15
    .code
            cbw         ; convert AL to AX, signed
            cwd         ; convert AX to DX:AX, signed
            idiv dnum

The quotient will be in AX and the remainder in DX.

Example 2
Suppose we have a signed 32-bit value in EAX. We wish to divide it by the 32-bit signed value in EBX. Here's what the assembler code looks like:
    cdq         ; convert EAX to EDX:EAX
    idiv ebx

The quotient will be in EAX and the remainder in EDX.

Example 3
Suppose we have an unsigned 32-bit value in EAX. We wish to divide it by the 32-bit unsigned value in EBX. Here's what the assembler code looks like:
    xor edx,edx     ; clear the high-order bits in edx
    div ebx         ; unsigned division


The quotient will be in EAX and the remainder in EDX.

Arithmetic Expressions in Assembler


Suppose we have an arithmetic expression in 16-bit signed integers, and we wish to evaluate it by writing the appropriate assembler instructions. Here's an example expression:
(a-b+15)/(x-y*8)

We first need to decide the correct order of evaluation of the operators. These follow algebraic rules, as follows:

- Always evaluate things inside parentheses first. Parentheses take precedence over any operators on the parenthesized expression.
- Operators + and - are evaluated from left to right. They have equal precedence.
- Operators * and / are evaluated from left to right. They have equal precedence.
- Operators * and / take precedence over + and -. This means they are done first, even if they follow + or -.
- If you must save a temporary value, you can move it to an available register, or push it in the stack, later popping it back. Get used to pushing intermediate values in the stack, since that's how our compiler will do it.

Following these precedence rules, here's what needs to be done, step by step. We've introduced the names t1 and t2 to hold temporary values:
    t1= a-b
    t1= t1+15
    t2= y*8
    t2= x-t2
    result= t1/t2

We need to be aware that there are a limited number of registers that can be used for arithmetic (AX, BX, CX and DX). We consider them as temporary resources: we can use them to carry a value temporarily during arithmetic operations, but can then assign them to some other purpose later. We can't afford to just assign some program variable like x to a register permanently. Also note that when we subtract b from a, we don't want to change a or b in the process. That's why we need temporary cells.

Any of these four registers can be used for our arithmetic. However, AX and DX are required for division, and we have no choice in the matter. EBX plays a special role in memory indirection, and may not be available for arithmetic. Given these constraints, let's see how to evaluate the expression given above using assembler instructions. Let's first declare the variables a, b, x, and y as they would appear in an assembly program:
    .data       ; put these in the data segment
    a   dw 25
    b   dw 37
    x   dw 5
    y   dw 6

You need to declare each and every variable used in an assembly program, just as you do in C. Otherwise, the assembler will be unable to know what some name means. Each of these dw declarations also tells the assembler that the datum is a 2-byte word, a tidbit of information that it may need in order to construct an appropriate instruction, and to check your instruction's validity. Literal constants (such as the 15 and the 8) could also be assigned to memory locations. However,

they can also be built into an instruction directly, as we've seen with the mov instruction, and that's usually more efficient. The instructions add, sub, mul, imul, div, idiv can also accept an integer constant. If you want to assign names to literal constants, use the equ directive. Now that we have our variables assigned to memory locations, let's return to our expression:
(a-b+15)/(x-y*8)

The parentheses require that we work on a-b+15 and (separately) x-y*8 before performing the division. In the expression a-b+15, the subtraction is done, then the addition, following the rule that these operators are done from left to right (+ and - have equal precedence). So let's write assembler for a-b+15:
    .code       ; place what follows in the code segment
    mov ax,a
    sub ax,b
    add ax,15

Notice that the first mov instruction refers to the memory location a, which we've been careful to declare through a dw operation. This copies the 16-bit memory value (assigned to variable a) into register ax. The second instruction
sub ax,b

refers to the memory location b. After it's executed, register ax contains a-b. The third instruction
add ax,15

uses the literal constant 15, which will be carried directly in the instruction at runtime. The 15 doesn't need a memory location assigned to it. And, incidentally, the assembler will notice that 15 is supposed to be added to a 16-bit register value, and so a 16-bit integer constant 15 will be attached to the instruction. If the instruction used eax instead, the same 15 would be expanded to 32 bits. The result of the addition, a-b+15, will be in register AX. We're going to put this value aside so that we can work on the second expression, x-y*8, also using AX. A good place to save AX is in the stack, so we write a push instruction next:
push ax

We will pop the stack later when we need the first part of the expression. The expression x-y*8 requires that we do the multiplication first, since * has higher precedence than -. Here's what that assembler might look like:
    mov ax,y
    imul ax,8

At this point, the product y*8 is carried in AX. We pray that variable y is small enough that y*8 will still fit in a 16-bit register. If that's the case, then we can assume that AX contains the product. That's the assumption we'll follow in our compiler. This is a practical compromise, since otherwise every multiplication that appears in a program would require carrying more and more bits: two 32-bit operands yield a 64-bit product, two 64-bit operands yield a 128-bit product, and so forth.

The programmer needs to be aware that all operations are done with finite arithmetic, and that an overflow is always a possibility. Overflow is possible with any of the arithmetic instructions. When an overflow occurs, the OF flag bit is set, and can be tested immediately after the arithmetic operation. You can also configure an 80x86 to generate an interrupt automatically on any overflow.

We're ready to do the subtraction next, assuming that the previous multiplication yielded a correct value in AX. It will have if there's been no multiply overflow:
    sub ax,x
    neg ax


What we've done is evaluate the expression


-(y*8-x)

which is algebraically equivalent to


x-y*8

We've done this because there's no reverse subtraction operation on the 8086. The instruction
sub ax,x

actually yields
y*8-x

in AX, which has the wrong sign. The following neg reverses this, yielding the right answer. We now have the divisor (x-y*8) in register AX, and the dividend (a-b+15) in the stack. We need to pop the dividend out, expand it into DX:AX, then perform the division. This also requires moving AX somewhere else, so that it won't be destroyed by our work on the dividend. Here's more instructions to carry all that out:
    mov bx,ax   ; save ax in bx. We'll use bx in the division
    pop ax      ; get the dividend back
    cwd         ; expand it into dx:ax
    idiv bx     ; divide by the divisor

The instruction cwd is important to this operation. It expands the signed 16-bit integer in AX into an equivalent 32-bit signed integer in the registers DX:AX, as required by the integer division idiv. And that's all! The resulting expression value will be in AX, unless the divisor happens to be zero.
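Collected in one place, the complete 16-bit sequence reads:

    mov  ax,a
    sub  ax,b       ; ax = a-b
    add  ax,15      ; ax = a-b+15
    push ax         ; save the dividend
    mov  ax,y
    imul ax,8       ; ax = y*8
    sub  ax,x       ; ax = y*8-x
    neg  ax         ; ax = x-y*8
    mov  bx,ax      ; the divisor goes to bx
    pop  ax         ; retrieve the dividend
    cwd             ; expand it into dx:ax
    idiv bx         ; quotient in ax, remainder in dx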

Evaluation Using 32-bit Arithmetic


The same expression can be evaluated using 32-bit arithmetic. (Or some combination of 8-bit, 16-bit and 32-bit arithmetic, but there's no point in mixing precisions). Let's evaluate the same expression using this extended arithmetic:
(a-b+15)/(x-y*8)

Begin with the variable declarations. Since these are now 32-bit values, we need to declare them with the dd (data double) operation:
    .data       ; place what follows in the data segment
    a   dd 576
    b   dd 89120
    x   dd 872
    y   dd 78

The instructions are essentially like those we used before, except that we now use the extended 32-bit registers eax, ebx, etc. for temporary values:
    .code           ; place what follows in the code segment
    mov eax,a
    sub eax,b       ; eax now carries a-b
    add eax,15      ; eax now carries a-b+15
    push eax        ; save it till later
    mov eax,y
    imul eax,8      ; eax now carries y*8
    sub eax,x       ; eax now carries y*8-x
    neg eax         ; eax now carries x-y*8
    mov ebx,eax     ; the divisor will be ebx
    pop eax         ; get (a-b+15) back: this will become the dividend
    cdq             ; form a 64-bit dividend
    idiv ebx        ; the last division

Notice that we need to use cdq (convert double to quad) rather than cwd (convert word to double) to precede the division, since the division command requires a 64-bit number in EDX:EAX. (The least


significant 32 bits are in EAX, and the most significant bits in EDX). After the idiv operation, the quotient is in EAX and the remainder is in EDX.

Storing the Result in Memory


Expressions like
(a-b+15)/(x-y*8)

almost never appear by themselves in a program. Instead, we find statements like this one, as it might be written in C:
z= (a-b+15)/(x-y*8);

This calls for evaluating the expression on the right side of the = operator, then storing the result in the lvalue on the left side of the = operator. Using 32-bit integer arithmetic, the assembler code now looks like this:
    .data           ; place what follows in the data segment
    a   dd 576
    b   dd 89120
    x   dd 872
    y   dd 78
    z   dd 0        ; the 0 will be overwritten by our instructions
    .code
    mov eax,a
    sub eax,b       ; eax now carries a-b
    add eax,15      ; eax now carries a-b+15
    push eax        ; save it till later
    mov eax,y
    imul eax,8      ; eax now carries y*8
    sub eax,x       ; eax now carries y*8-x
    neg eax         ; eax now carries x-y*8
    mov ebx,eax     ; the divisor will be ebx
    pop eax         ; get (a-b+15) back: this will become the dividend
    cdq             ; form a 64-bit dividend
    idiv ebx        ; the last division
    ; EVERYTHING above this line is the same as before!!
    mov z,eax       ; store z

You will notice that the only difference is the very last instruction, which copies eax to the memory location assigned to variable z.

Writing a Complete Assembler Program


The assembly code in the previous section would not be accepted by the Masm assembler. What's needed besides the instructions are a header, some special directives and declarations for the variables. Here's a complete assembly program that Masm can accept and convert into an EXE file on a PC for execution:
    ; Example assembler program for Masm
            .386            ; (1)
            .model flat     ; (2)
            .stack 4096     ; (3)
            .data           ; (4)
    a       dd 22005        ; (5)
    b       dd 175
    x       dd -35
    y       dd -5
    z       dd 0
            .code           ; (6)


            extrn _main:near    ; (7)
            public _pasMain     ; (8)
    _pasMain proc near          ; (9)
            mov eax,a
            sub eax,b           ; eax now carries a-b
            add eax,15          ; eax now carries a-b+15
            push eax            ; save it till later
            mov eax,y
            imul eax,8          ; eax now carries y*8
            sub eax,x           ; eax now carries y*8-x
            neg eax             ; eax now carries x-y*8
            mov ebx,eax         ; the divisor will be ebx
            pop eax             ; get (a-b+15) back: this will become the dividend
            cdq                 ; form a 64-bit dividend in EDX:EAX
            idiv ebx            ; the last division
            mov z,eax           ; store z
            ret                 ; (10)
    _pasMain endp               ; (11)
            end                 ; (12)

Here's a description of each of the new elements in this program, labelled (1), (2), ... (12):

(1) The .386 tells Masm to assemble the instructions assuming a minimum 80386 processor. This works equally well for the 80486 and Pentium processors, but not the 8086 or 80286.
(2) The .model flat tells Masm to prepare instructions for protected mode.
(3) The .stack 4096 tells Masm to allocate a stack of 4096 bytes when the program is loaded. This isn't used in the assembler, but is passed on to the loader. We need a stack because of the push and pop instructions.
(4) The .data tells Masm to place the following instructions or data allocations in a data segment, controlled by the data segment register DS.
(5) This is how memory space in the data segment is allocated. It also tells Masm that the variable a is a double word (32 bits), and that its initial value should be (decimal) 22,005. A similar declaration is needed for the other variables. We assign 0 to variable z because it'll be assigned-to in the last instruction; its initial value therefore doesn't matter.
(6) The .code tells Masm to place the following instructions in a code segment, controlled by the code segment register CS. The memory space for data and code will be in separate memory blocks when the program is loaded.
(7) This extrn tells the loader to look for a main procedure in the associated pasmain.c file.
(8) The public _pasMain tells Masm that some external program will want to access the program at the location of _pasMain. The underbar is needed for Microsoft Masm/C linkages. The companion C program will want to call this one as a function, like this: pasMain();
(9) This assigns the symbol _pasMain to the next program location, which is where our assembly code starts, as a function to be called.
(10) The ret instruction (explained later) returns control to the C program, just after the function call pasMain().
(11) This endp marks the end of the function of the same name started above.
(12) end is an assembler directive that says this is all there is to this program.

Executing the Assembler Program


Refer to the lab notes on assembling and executing this program. This very much depends on the nature of your computer resources, whether it be a Microsoft Visual C++ environment, Borland C/C++,

or Linux/86. The Masm assembler syntax is supported under the Microsoft and Borland environments. Under Linux/86, the assembler syntax is quite different, although the instructions are the same. (An example written for Linux using AT&T syntax notation is given at the end of this chapter). You will need some way to call pasMain. You can write your own C calling function, like this:
    #include <iostream>
    using namespace std;

    extern "C" int pasMain();

    int main()
    {  pasMain();
       return 0;
    }

If you succeed in compiling this, linking it to your assembler object file, then executing it, you'll discover it apparently does nothing. There's no output! You can change that a little by printing the return value of pasMain. This will be the value left in EAX at the end of the program, which is to say, z:
    #include <iostream>
    using namespace std;

    extern "C" int pasMain();

    int main()
    {  cout << pasMain() << endl;
       return 0;
    }

Now you should see 4369 printed out when this is run. The "C" in the extern declaration is needed in a C++ compiler so that the name _pasMain is sent to the linker with no type decoration. Without it, you'll get some linker errors.
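The exact build steps are covered in the lab notes, but as one plausible sketch using the Microsoft tools (the file names here are made up):

    ml /c /coff example.asm         (assemble example.asm into example.obj)
    cl pasmain.cpp example.obj      (compile the C++ driver and link the two)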

There's More than One Way to Write Assembler


You'll discover that there are usually several different ways to describe an operation in assembler. The trick is to find the most efficient way. Here are some guidelines, some of which we've followed in our example above:

- Try to find the fewest possible instructions to perform the operation. Fewer instructions mean fewer instruction cycles when the assembler code is assembled and executed. You may have to experiment with different approaches to find the least number of instructions. This will also give you a healthy appreciation of what's expected in a compiler!
- Try to avoid saving temporary results in memory. Operations between registers are considerably faster than memory fetches and stores. In particular, notice that push and pop are memory operations. In some cases, you can use another register to save a temporary result to avoid using a push or pop. (We could have done that in our example computation.)
- Look for the most efficient order in which to carry out the operations. Sometimes you need to evaluate the right operand of an operator before evaluating the left operand, in order to avoid pushing and popping a result, or having to issue a neg instruction because a subtraction was done in the wrong order.
- Note that multiply and divide require the use of the EAX and EDX registers. You need to plan ahead so that (if possible) the values you want appear in the appropriate registers to prepare for these operations.


- Look for algebraic equivalences that might save instructions. The smallest algebraic expression is not always the best way to evaluate the expression.

These issues must clearly be confronted by any optimizing compiler. An important task of a compiler is to find the most efficient sequences of instructions that implement a particular expression or language feature. This can be a challenging puzzle, as we'll discover. The mechanisms of parsing and compilation are usually not compatible with the most efficient target code.

Bitwise Logical Instructions


The following bitwise instructions are supported in the 80x86. They accept source and destination operands in a way exactly like the add and sub instructions. 8-bit, 16-bit and 32-bit operations are supported. These also clear the flags OF and CF. SF, ZF and PF are set according to the result. AF is undefined.

Instruction Mnemonic    Purpose
not dest                dest is replaced by its bitwise complement
and dest,source         dest is replaced by the bitwise dest AND source
or dest,source          dest is replaced by the bitwise dest OR source
xor dest,source         dest is replaced by the bitwise dest XOR source

The logical combinations of not, and, or and xor are given by the following Boolean table:

a  b      not a    a and b    a or b    a xor b
0  0        1         0          0         0
0  1        1         0          1         1
1  0        0         0          1         1
1  1        0         1          1         0

These functions apply to each pair of bits in the source and destination. Like their arithmetic counterparts, these can be applied to bytes or words. For example,
    mov al,011000011b
    mov bl,000011011b
    xor al,bl

yields 011011000b in register al. We can also see from the xor table why
xor ax,ax

always yields ax = 0. The corresponding bits must always be the same, and whether 0 or 1, the xor is zero. These operations will be used in our student compiler to support the Pascal operators and, or, and not.

Shift and Rotate Instructions


There are several instructions designed to shift bits right or left within a register. Since Pascal has no


shift operation per se, these won't be used in the generated code. However, they do appear in the library files, so here's a brief description. For more details, refer to the Intel manual.

Instruction Mnemonic    Purpose
shl/sal dest,count      shift bits in dest left by count bits
shr dest,count          shift bits in dest right by count bits, logical
sar dest,count          shift bits in dest right by count bits, arithmetic
rol dest,count          rotate bits in dest left by count bits
ror dest,count          rotate bits in dest right by count bits
rcl dest,count          rotate bits in dest:CF left by count bits
rcr dest,count          rotate bits in dest:CF right by count bits

The shift instructions transfer the bits right or left in the register. Bits transferred out of the least-significant position (right shift) or most-significant position (left shift) are lost. In the left shifts, zero bits are brought into the least-significant position. For example, with shl ax,1, the bits in ax are shifted left by one position. The sign bit is lost, and a 0 bit is brought into the least significant bit.

An arithmetic shift (sal, sar) is equivalent to multiplying (left shift) or dividing (right shift) by a power of two, preserving the sign of a twos-complement number. Thus a fast way to multiply by 2 is a shift left by 1 position. In a logical shift (shl, shr), zero bits are brought into the least or most significant position. Note that shl and sal are equivalent, but shr and sar are not.

The rotate instructions perform a shift, but the bits transferred out of the least or most significant position are transferred back into the other end. For example, with rol ax,1, the bits in register ax are shifted left by one position, and the bit shifted out of the sign bit is copied into the least significant bit position.

In the 80x86, these instructions can only operate on one of the data registers, in 8-, 16-, or 32-bit format. A shift of a memory word is forbidden. The count may either be a constant or the contents of register CL. By using CL, the shift count may be supplied at execution time.

Some Examples
Suppose AL contains 10001001b, and CF (the carry flag) contains 0. Then:

shl al,1 and sal al,1 yield 00010010b.
shr al,1 yields 01000100b.
sar al,1 yields 11000100b.
rol al,1 yields 00010011b.
ror al,1 yields 11000100b.
rcl al,1 yields 00010010b, and CF contains 1 (the new CF value came from the sign bit).
rcr al,1 yields 01000100b, and CF contains 1 (the new CF value came from the least bit).
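Since an arithmetic shift multiplies or divides by a power of two, it can stand in for mul or div when the factor is a power of two. A small sketch (the values are made up):

    mov ax,7
    shl ax,3        ; ax = 56: shifting left 3 places multiplies by 8
    mov cl,2
    sar ax,cl       ; ax = 14: arithmetic right shift by 2 divides by 4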

Examples of Masking and Combining


We sometimes need to extract a few bits from a byte or a word, then transfer those bits to some other word. For example, suppose that we are interested in bits 3 through 5 in register AX. We wish to extract these three bits and transfer them to bit locations 12-14 in register CX. We do not want to change any of the other bits in either AX or CX, but we can use BX for the purpose. Problems of this nature can usually be solved like this:

Appendix 2: The Intel 80x86 Microprocessor, page 488

- Copy the source register S to a free register R.
- Extract the bits from R by using the and instruction with a mask. The mask is a constant word that contains a 1 bit in the bit locations we want and a 0 bit elsewhere. The and operation then forces all the bits to be 0 where the mask is 0, and to be the desired bits where the mask is 1.
- Shift or rotate the bits in R so that they will align with the destination register D.
- Use the complement of a mask on register D with and. This has the effect of forcing the target bit positions to 0, but keeping the other bits the same.
- Combine registers R and D with or. This will effectively "drop" the desired bits extracted from S into the new position in D.

For our particular example, the following instructions will do the trick:
    mov bx,ax
    and bx,0111000B             ; extract bits 3 through 5
    mov bh,bl                   ; transfer bits 3-5 to bits 11-13
    mov bl,0
    shl bx,1                    ; shift bits 11-13 to 12-14
    and cx,01000111111111111B   ; mask out bits 12-14 in cx
    or  cx,bx                   ; drop in the bits from register bx

Although we need seven instructions to carry out this operation, they execute rapidly because all the operations are on registers, internal to the microprocessor. Bit-level operations of this kind are difficult to optimize because there are rarely special instructions provided for them. It's usually better to use full bytes or words rather than try to economize on memory.

Branch Instructions
There are a large number of branch instructions supported by the 80x86. We will only be interested in a few of them, as follows:

Instruction Mnemonic    Purpose
jmp dest                branch unconditionally to the dest address
je dest                 branch to dest if the flags indicate "equal"
jne dest                branch to dest if the flags indicate "not equal"
jg dest                 branch to dest if the flags indicate "greater than"
jge dest                branch to dest if the flags indicate "greater than or equal"
jl dest                 branch to dest if the flags indicate "less than"
jle dest                branch to dest if the flags indicate "less than or equal"

The effect of a branch instruction is to change the EIP register. Recall that EIP controls which instruction is to be fetched next by carrying its address. Therefore, if any of the branches in the table above are effective, EIP is changed to the dest address. Clearly, dest should be a code address, not a data address.

When EIP is changed, the instruction queue in the CPU is invalidated and must be rebuilt. This isn't much of a problem for the 8086, but the Pentium-class micros carry a significant instruction queue that becomes invalidated. Rebuilding the instruction queue will require some additional memory cycles. These are interlaced with instruction execution and may not be noticed at all. The overall performance is much higher with a queue, since often an entire program loop can be executed out of the microprocessor queue, requiring no instruction fetches from memory.

The code addresses for these instructions must be simple addresses. Such addresses as [ebx], [ebp+6], dest+10, etc. are not supported. The dest values are encoded as an offset from the current IP address, not from CS. For the 8086, the maximum offset is 32767 bytes for jmp, and only 127 bytes for the conditional instructions. For the 80386, the maximum is 32767 in all cases. Since 127 bytes is only a few instructions, we've chosen the 80386 as the minimum machine for the student Pascal compiler.

The jmp instruction is unconditional. The next instruction executed is always the one specified by the dest address. Any instruction following a jmp will never be executed, unless it's the target of some other branch instruction. The remaining instructions are conditional. If the condition is met, the next instruction executed is the one specified by the dest address. Otherwise, the next instruction executed follows the conditional jump. These six instructions cover all the possible results of a cmp instruction, and are used to work out the control flow as it depends on the relative values of some operands.

Example 1
A Pascal while-do statement looks like this:
while a<b do a:= a+1;

This can be expressed in 80x86 assembly as follows, using the cmp, conditional jump jge and unconditional jump jmp instructions:
w1: mov ax,a
    cmp ax,b      ; compare a<b
    jge w2        ; if the comparison fails, go to marker w2
    inc a         ; if the comparison succeeds
    jmp w1        ; go back to marker w1 and do another comparison
w2:

The cmp instruction effectively subtracts b from a, setting the flags register. For example, if a is actually less than b, then the "sign flag" SF is 1, indicating that a-b is negative, and the "zero flag" ZF is 0, indicating that a-b is not zero. These two flags are tested by the jge instruction ("jump if greater-than or equal"), which would decide that the branch to w2 should not be taken, so instead the following inc instruction is executed. The inc performs the a:=a+1 operation. It's followed by jmp w1, which returns control to the first instruction. It's clear that these five instructions will be executed repeatedly until the value of a is equal to or greater than the value of b. When that occurs, the jge instruction causes control to pass to location w2. This marks the following sequence of instructions, whatever they are.

Example 2
A Pascal if-then-else statement looks like this:
if a>=b then a:=a+1 else b:=b+1;

This can be expressed in 80x86 assembly as follows, again using the cmp instruction, and a conditional jump jl.
    mov ax,a
    cmp ax,b      ; comparing a>=b
    jl  L1        ; jump to the "else" part if a<b
    inc a         ; do the a:=a+1
    jmp L2        ; must skip the else part
L1: inc b         ; the "else" part
L2:


Notice that we need an unconditional jump to skip around the "else" clause (the jmp L2). Otherwise, the inc b will be executed just after the inc a is done. You should also realize that some more instructions must follow the marker L2. Otherwise, the processor will just try to interpret whatever bytes are in memory there.

Example 3
A Pascal case statement is like a C switch statement. Here's a simple one. Integer k is tested against the labels 5, 6 and 8. If it's equal to 5 or 6, then we execute a:=a+1. If it's equal to 8, then we execute b:=b+1. In any other case, we execute c:=c+1.
case k of
  5,6: a:=a+1;
  8:   b:=b+1;
  otherwise c:=c+1;
end;

And here's the equivalent assembler code:


    cmp k,5
    je  L5
    cmp k,6
    je  L6
    cmp k,8
    je  L8
    ; here's the "otherwise" case
    inc c            ; do c:=c+1
    jmp EndCase
L5:
L6: inc a            ; do a:=a+1
    jmp EndCase
L8: inc b            ; do b:=b+1
    jmp EndCase
EndCase:

Comment: This could be made more efficient by first loading k into the AX (or AL) register. Comparing a constant to a memory location requires fetching the memory value each time, while comparing a constant to a register value is much faster.

Notice that after each incrementation, we need to unconditionally branch to the EndCase label, skipping over the other case possibilities.

Each of the markers L5, L6, L8, EndCase can be any names whatever. However, MASM, like most assemblers, considers such marker names as global. If you once use a marker name, you can't reuse it. You'll also notice that the two markers L5 and L6 refer to the same location. There's no harm in this, and no loss in runtime efficiency.

Function Call and Return


The branch instructions aren't well suited to function calls. One could execute a jmp to branch to the first instruction of a function, and then another jmp later on to return. But this would only permit the function code to be executed from just one program location. We usually want to call a function from several different places. To do that, we need to keep track of the location of the function call so that we can later return to that location. The location of the function call in memory is called the return address. (More precisely, the return address is the code offset of the instruction that follows the function call.)


Mnemonic     Purpose
call dest    Push return address, then go to dest
ret n        Pop return address ra and n bytes, then go to ra

The call instruction does that by first pushing the code location of the return address on the runtime stack, then branching to the destination address.

When the function code is fully executed, we wish to return to the location just past the call instruction. This is easily done with the ret instruction. ret expects to find a return address on the top of the runtime stack, i.e. at SS:ESP. It pops the stack, then transfers control to this address, i.e. EIP is set to that address. The instruction following the call will then be the next one fetched and executed.

The ret can take a non-negative constant integer n. When n is specified, the stack is popped by an additional n bytes after the return address is removed. This feature is useful in removing passed parameters from the stack upon the return. Note that n is in bytes, not words, despite the fact that push and pop work only with words.

Passing Parameters to a Function


A pure function call (no parameters) isn't very useful in a programming language. We'd like to pass parameters to a function, and also have it return a value. For example, consider the Pascal declaration
function factorial(n: integer): integer;
begin
  if n <= 1 then factorial:=1
  else factorial:= n*factorial(n-1);
end;

This function accepts one integer parameter, and returns the factorial function n! = n*(n-1)*(n-2)*...*2*1 through the use of recursion. Pascal has no return statement as C does. Instead, the value is returned through an assignment statement, like this:
factorial:= something;

If parameter n is greater than 1, then it computes n*factorial(n-1). The result is returned by assigning it to the function name factorial. Here's a complete Pascal program using this function:
program TestFactorial;
var k: integer;    {declaring an integer variable}

{ declaring function factorial }
function factorial(n: integer): integer;
begin
  if n <= 1 then factorial:=1
  else factorial:= n*factorial(n-1);
end;

begin  { the main program of TestFactorial }
  writeln('factorial(1)= ', factorial(1));
  writeln('factorial(2)= ', factorial(2));
  writeln('factorial(3)= ', factorial(3));
  writeln('factorial(4)= ', factorial(4));
  writeln('factorial(5)= ', factorial(5));
end.


Functions Must Support Recursion


We want all the Pascal functions to support recursion. This means that:
- any function call should push a return address on the runtime stack. This is done through the call instruction.
- space for a return value must be provided on the stack.
- the actual parameters of the function (pass parameters) should be copied to the stack.
- any local variables of the function should be carried on the stack.
- during execution of the function body, we need an efficient way to access the passed parameters (now called the formal parameters), the return value and the local variables. We'll use register EBP for this purpose, since:
  o EBP refers by default to the stack segment
  o we can use offsets from the base EBP location to access the function variables through an address like this:
[ebp+16]

- when we return from the function, all of the above must be unwound so that the stack is properly popped of its excess baggage.

How is this achieved? Through some instructions before the call is done, and some instructions just after the call is done. We'll illustrate this with a call to the function factorial given above, for example:
factorial(5)

Before the Function is Called


We need space for the return value. That's easy. The function returns a 4-byte integer, so we decrement ESP by 4:
sub  esp,4

Now we need to evaluate and push each of the parameters. There's only one, it's the "5", an integer:

mov  eax,5
push eax

(This could be done on the Pentium with one instruction, push dword ptr 5.)

Calling the Function


The function call pushes a return address (4 bytes) on the stack, and transfers control to the first instruction of the function:
call factorial

Starting the Function


To get the function going, we need to do all of this:
- save the previous value of EBP, which we do by pushing it on the stack:
push ebp

- set EBP to point to its saved value. This will now permit us to access variables in the stack through offsets from EBP. We can easily do that by copying ESP to EBP:
mov  ebp,esp

- allocate space for any local variables. (There aren't any in this example, so no more code is needed.)


Where are the Parameters?


Let's look at the stack situation as it exists just now. Here's what we know about it:

Address as          Name of parameter        Bytes in     How obtained
[EBP +/- offset]                             parameter
[ebp+12]            return value (integer)   4            sub esp,4
[ebp+8]             n (integer)              4            mov eax,5
                                                          push eax
[ebp+4]             return address           4            call factorial
[ebp+0] = [esp]     previous EBP             4            push ebp

The stack top ESP is the same as EBP at this moment. However, we can now push other bytes in the stack (changing ESP) without changing EBP. EBP now points to the "previous EBP" cell. [EBP+8] is how we address the passed parameter n. Before returning a value from the function, we'll assign it to [EBP+12].

How Should a Function's Value be Returned?


There are different ways of returning a function value:
1. Copy the value into a register just before returning, for example, into EAX. We can use a register for this because we are about to return from the function, not make another recursive call.
2. A floating point number could be left in the floating-point unit. More about that in the next section.
3. Leave the value on the stack top. You'll notice that we allocated the return value first, so that on return, it will be on the top of the stack for further use.
4. Leave the value in some other memory location.
5. Some combination of the above.

Methods 1 and 2 are the most efficient, if either can be used. Integers, floats and pointers can be returned this way. No memory access is needed either before the return or after the return. We'll choose this one for our factorial example. Obviously, this is a compiler decision, and the compiler must be consistent about how function values are returned.

Method 3 is less efficient, but has the advantage of unlimited space for a return value. Our function can return a large struct, string or set this way. You just need to allocate enough stack space to carry it all.

Method 4 is even less efficient, since the return value must be copied somewhere else. However, we'll use this to support returned strings and sets in Pascal.

The combination (5) is in fact how most compilers work. The choice of method of returning a value from a function will then depend on the type of the return value. The compiler will always know that type, and can both choose the best method and know the number of bytes that must be supported. In this example, we'll use method 1, returning an integer value in register EAX.

The Function Body


Here's the function body as expressed in Pascal:
if n <= 1 then factorial:=1 else factorial:= n*factorial(n-1);

It starts by comparing n to 1. We know how to do that in assembler:


cmp  dword ptr [ebp+8],1

The comparison instruction cmp sets the zero and sign flag bits in the FLAGS register. We can use these flags to branch to either the then clause or the else clause. We also need two label markers for these clauses:
    jg   L_006                   ; jump to the else clause on >, else fall through
    mov  dword ptr [ebp+12],1    ; factorial:=1
    jmp  L_007
L_006:
    sub  esp,4                   ; start another call to factorial
    mov  eax,[ebp+8]             ; here's n
    dec  eax
    push eax                     ; pushing the parameter value n-1
    call factorial               ; call the function
    imul dword ptr [ebp+8]
    mov  [ebp+12],eax            ; the value is returned in eax
                                 ; ...so we set the return value cell [ebp+12]
L_007:

Returning From the Function


The body of the function is now finished. The evaluation has also set the return value, at [ebp+12]. We now need to do this to wrap things up and return from the function:
- copy the return value to register EAX:
mov eax,[ebp+12]

- release any local variables from the stack. There aren't any, so we don't need any instructions for this.
- restore EBP to its previous value:

pop  ebp

- return from the function and pop the stack, effectively removing the parameter n and the return value:

ret  8

The ret 8 not only pops the stack, fetching the return address from it, but also releases 8 more bytes. This restores the stack to the condition it was in when the function call sequence was begun. And, incidentally, the value computed by the function is in register EAX. So, after the function is called, we can use EAX in some additional computations, or assign it to a variable.

Function Header and Trailer Directives


The assembler likes to know which blocks of instruction code correspond to a function or a procedure. This helps it design an appropriate call and return instruction regarding the function, and also helps the assembly code writer avoid some silly mistakes, such as branching into or out of a function block. And, of course, we need to tell the assembler where the function code begins, i.e. name the function through a label marker.

Assembler function blocks are not nested. You have to line them up one after the other. But here are some useful directives (Masm conventions):

name proc near       ; use this to open a function block
name endp            ; use this to end a function block
public name:near     ; use this to be able to access the function from an external program

The "near" attribute (in protected mode) says that the function can be called with a 32-bit address


offset, using the current code segment. This is usually good enough for ordinary user programs. The alternative, far, provides a 32-bit address offset plus a 16-bit code segment value (6 bytes in all), supporting very large programs. The far option, for most computers, would only appear in operating-system code. Using these, here's how the factorial function would appear in Masm format:
factorial proc near
    push ebp                     ; save ebp
    mov  ebp,esp                 ; set ebp to the local stack environment
; start the function body here
    cmp  dword ptr [ebp+8],1     ; compare n to 1
    jg   L_006                   ; jump to the else clause on >, else fall through
    mov  dword ptr [ebp+12],1    ; factorial:=1
    jmp  L_007
L_006:
    sub  esp,4                   ; start another call to factorial
    mov  eax,[ebp+8]             ; here's n
    dec  eax
    push eax                     ; pushing the parameter value n-1
    call factorial               ; call the function
    imul [ebp+8]                 ; multiply eax by n
    mov  [ebp+12],eax            ; the value is returned in eax
                                 ; ...so we set the return value cell [ebp+12]
L_007:
; this is the end of the function body, prepare to return
    mov  eax,[ebp+12]            ; set the return value
    pop  ebp                     ; restore ebp
    ret  8                       ; return, popping 8 more bytes
factorial endp

The Gnu AS Assembler


This assembler [7, 8] was designed by the Free Software Foundation project as a low-level support platform intended for any commercial computer system. Its design was aimed at configuration and portability, not friendliness. As such, it treats every computer as some variation of a Von Neumann fetch-execute system. It was also designed to support the configurable gcc compiler by providing a relatively platform-independent way of generating high-performance executable code through stable, well-defined directives.

The AS assembler was designed to work in conjunction with the Gnu compiler gcc, not for direct use by programmers. As such, you can expect some surprises when you try it on handwritten assembly code. I found it would crash silently on many seemingly tiny syntax problems, such as an unknown instruction or an invalid tag on an instruction mnemonic. Use AS with great caution.

AS uses the so-called AT&T assembler notation rather than Masm notation. Here are the salient differences:
- Registers have the same names, but must be preceded by %. Thus %eax refers to the Masm eax register.
- Constants in instructions must be preceded by $, and have the form of a simple hex or octal number. Thus $0xFFFF represents a 16-bit constant containing all 1's. There is no special representation for floating-point numbers. You (or a compiler) must separately convert a float to its equivalent hex form.
- The order of operands in two-operand instructions is reversed. Thus the Masm


mov ax,bx becomes movw %bx,%ax in AS.

- The size of the operand must usually be attached to the instruction mnemonic: b for byte, w for word, l for long (32 bits), and q for quad (64 bits). For floats, s means a 32-bit float and l refers to a 64-bit float.
- Variable names require no special prefix, and refer to a memory location or a macro.
- The notation for bracketed registers is more restricted than in Masm, and uses parentheses with the displacement in front. Thus [ebx+15] becomes 15(%ebx). The alternative Masm form 15[ebx] must also be written 15(%ebx).
- A macro processor is built in, and follows the Unix m4 notation.
- Variables and blocks of variables have special directives.
- The headers and certain other details are different and require attention.

For more details, the pascal5 student compiler can generate either Masm or AT&T syntax assembly code. Use option -G to generate AT&T syntax, and no option for Masm syntax. The Pascal programs in directory pasprogs can all be assembled, linked and executed in either mode. By using this compiler, you can view Masm and AT&T notation side-by-side and learn the differences.

Example
Here are portions of the program containing factorial (review the Masm description given above) as generated by the pascal5 compiler with option -G:
# Pascal program TESTFACTORIAL
        .ident "Qparser: pascal5 compiler version V16.1"
        .include "aservice.s"
        .text
        .data
K_038:  .long 0
# FACTORIAL_039 EQU [ebp+12]
# N_040 EQU [ebp+8]
        .text
        .align 4
        .type FACTORIAL_041,@function
FACTORIAL_041:
        pushl %ebp
        movl  %esp,%ebp
        pushl stlink+8
        movl  %ebp,stlink+8
        cmpl  $1,8(%ebp)
        jg    L1
        movl  $1,12(%ebp)
        jmp   L2
        .p2align 4,,7
L1:
        subl  $4,%esp
        movl  $1,%eax
        subl  8(%ebp),%eax
        negl  %eax
        pushl %eax
        call  FACTORIAL_041
        imull 8(%ebp)
        movl  %eax,12(%ebp)
        .p2align 4,,7
L2:


        movl  12(%ebp),%eax
        popl  stlink+8
        movl  %ebp,%esp
        popl  %ebp
        ret   $8

# ----- SYMBOL TABLE -----
# N INTEGER
        .globl pasMain
        .align 4
        .type pasMain,@function
pasMain:
        pushl %ebp
        movl  %esp,%ebp
        pushl %ebx
        pushl stlink+4
        movl  %ds,%eax
        movl  %eax,%es
        pushl $1
        .data
L3:     .byte 14
        .asciz "factorial(1)= "
        .text
        lea   L3,%ebx
        call  pushString
        pushl $0
        writesA
        addl  $8,%esp
        pushl $1
        subl  $4,%esp
        pushl $1
        call  FACTORIAL_041
        pushl %eax
        pushl $0
        writeiA
        addl  $12,%esp
        pushl $1
        writelnA
        addl  $4,%esp
        pushl $1
        .data
L4:     .byte 14
        .asciz "factorial(2)= "
        .text
  (portions omitted here)
        pushl $1
        writelnA
        addl  $4,%esp
        popl  stlink+4
        popl  %ebx
        movl  %ebp,%esp
        popl  %ebp
        ret
# ----- SYMBOL TABLE -----
# ABS PROCEDURE
# ARCTAN PROCEDURE


  (etc.)
        .end

References
[1] The 8086/8088 User's Manual, Intel order number 240487-001, published 1989.
[2] Ytha Yu and Charles Marut, Assembly Language Programming and Organization of the IBM PC, McGraw-Hill, 1992.
[3] Richard C. Detmer, 80x86 Assembly Language and Computer Architecture, Jones and Bartlett, 2001.
[4] Microsoft MASM assembler. This is provided on the CDROM accompanying this publication, through special permission of Microsoft Corporation. It's compatible with Microsoft's Visual Studio. Specific instructions are provided with the laboratory materials.
[5] Microsoft C/C++ compiler, provided with Visual Studio. See the laboratory materials for special instructions.
[6] Microsoft CodeView debugger. This is part of the Visual Studio framework. Masm can be incorporated into this framework, making efficient debugging of assembly possible.
[7] Gnu AS assembler. Provided with the Linux operating system, and also with other platforms that the Gnu compilers support. AS is actually a collection of assemblers, configured for a particular one through the Gnu compiler setup. Details can be found by running info as on any Linux platform.
[8] Gnu gcc compiler. Provided with Linux, gcc is also a collection of C compilers, configured for a particular platform through an installation process. Details can be found by running info gcc on any Linux platform.
[9] nasm assembler. Available through the web. Try "nasm" in almost any computer-oriented search engine.


Appendix 3. The Intel FPU


napp3.doc, vs. 1

Introduction
All 80x86 floating-point operations, starting with the 8086, are handled through a set of special instructions. (All the floating-point instruction mnemonics start with the letter f, i.e. fadd, fmul, fcom, etc.) In the past, these instructions were either interpreted directly by a separate hardware chip (the 8087) or were emulated in software. Early 8086, 286, 386 and 486 processors had no built-in FPU chip, but could work in parallel with a separate 8087 chip. With the 586, or Pentium, processor, the FPU was incorporated in the CPU chip, so that the separate 8087 is no longer required.

A floating-point instruction was considered an unimplemented instruction on the early 8086 CPUs. When such an instruction was encountered in the chip's instruction stream at runtime, an interrupt service request was automatically routed to the DOS operating system. With no 8087, DOS was expected to identify the instruction and its operands, and perform the required service in software. With an 8087, the trap is replaced during execution by a wait instruction, and the floating-point instruction is picked up and executed by the 8087. When the 8087 is finished with the instruction, control is returned to the 8086 CPU.

Software implementation of floating-point operations is typically very time-consuming. For example, a floating-point addition might require 2000 instruction cycles to perform in software emulation mode. Nevertheless, programs with floating-point could execute correctly, albeit slowly. With an 8087, a floating-point addition might require only 10-20 cycles to complete.

Both the 8086 and the 8087 can access the address and data bus for memory operations. However, neither had direct access to the other's registers. Thus memory became a primary means of communication between the two processors, as we'll see. There's no mechanism for the 8087 to load (say) a 32-bit floating value from register EAX in the 80x86. The value must first be copied to memory, then the 8087 can be instructed to copy that memory value to its internal registers. Similarly, the result of a floating-point operator, including any comparison and overflow flags, must be transferred to memory before being loaded into an 80x86 register.

Most of these implementation details were improved with the Pentium-class processors. Since the FPU was always present, it could be better integrated into other processor operations, eliminating the need for instruction traps and special DOS floating-point interrupt handling. The movement of data through memory was also improved through the use of an onboard cache, reducing the need to actually perform a read/write cycle in external RAM. The cycle time of the Intel processors has continuously improved, almost exponentially over time, further increasing the performance of both the CPU and the FPU. Nevertheless, the instruction definitions have remained intact.

The significant achievement made by Intel was in maintaining a uniform instruction-level environment, such that the same floating-point code could be executed on a Pentium and on an 8086 with or without a companion 8087. This means that floating-point code will run on any of the Intel platforms, from a simple 8086 with software emulation to the most advanced Pentium. It also means that very old software can often execute correctly on a modern platform.

Integer Numbers vs. Floating Point Numbers


So far in this chapter, we've discussed operations on integers. An integer is a number with no fractional part and no exponential factor. We consider 1, 2, 3, etc. as integers, but not 1.677 or 15/16 or 5x10^-5.

An integer is represented in binary form in a 16- or 32-bit register by considering each of the bits as carrying a value equal to 2 raised to a certain power, the power being the bit position in the register. Thus bit 0, the so-called least significant bit, carries value 1 (2^0) if it's set and 0 otherwise. Bit 1 carries value 2 (2^1) if it's set and 0 otherwise. Bit 2 carries value 4 (2^2) if it's set and 0 otherwise. In general, bit n carries value 2^n if it's set and 0 otherwise. If we let the value of bit n in a 16-bit register be represented by bn (and this is either 0 or 1), then we can describe the integer value represented by the register's bits as

value = b0*1 + b1*2 + b2*4 + b3*8 + ... + b14*2^14 + b15*2^15

This is merely an interpretation of the bit patterns in the register. Given 16 bits in a register, it can carry 2^16, or 65,536, possible combinations of bits. How a particular bit combination is interpreted at runtime depends entirely on the operations applied to that combination. So this particular interpretation (as an unsigned cardinal number) is only one of several possible interpretations.

There are many other possible interpretations. The bit corresponding to b0 could be the leftmost bit, bit 15, b1 could be bit 14, and so forth. (It is in some architectures.) It could be any bit, for that matter, provided that the other fifteen bits are assigned unique locations in the register.

As we've seen, a 16-bit field might be interpreted as a two's-complement number rather than an unsigned number. In that case, the value is worked out by this algorithm:
if (b15 == 1) {
    reverse all bits;
    add 1;
    set sign = -;
} else
    set sign = +;
value = (sign)(magnitude of bits b14..b0);

A 16-bit field might be interpreted as two ASCII characters. Since ASCII requires at least 7 bits per character, not all the bits are needed. On the other hand, if we represent numbers as ASCII digits, then a 16-bit field is really only good for the numbers 00 through 99. Number 100 requires 3 ASCII characters, and there's not enough room for that.

A 16-bit field might be interpreted as four binary-coded digits (BCD). A BCD digit uses four bits to represent a single decimal digit, where decimal 0 is 0000, decimal 1 is 0001, decimal 5 is 0101, and decimal 9 is 1001. There are six more combinations possible, which aren't used. In this way, a 16-bit field can represent numbers from 0 to 9,999. Since we are using fewer combinations of bits in BCD than in binary, the maximum size of the numbers represented is smaller, 10,000 combinations for BCD vs. 65,536 for binary. Despite that limitation, the Intel 80x86 instruction set in fact supports an alternative form of integer arithmetic for BCD numbers. BCD is used in accounting and financial software.
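These competing interpretations can be made concrete with a few lines of C, which print a single 16-bit pattern under each reading (a sketch; the pattern 0x4241 and the formatting are ours):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t bits = 0x4241;    /* one fixed 16-bit pattern */

        /* unsigned binary interpretation */
        printf("unsigned:         %u\n", (unsigned)bits);       /* 16961 */
        /* two's-complement interpretation (same here: bit 15 is clear) */
        printf("two's complement: %d\n", (int)(int16_t)bits);   /* 16961 */
        /* two ASCII characters, low byte first */
        printf("ASCII chars:      %c %c\n", bits & 0xFF, bits >> 8);  /* A B */
        /* four BCD digits, one per 4-bit group */
        printf("BCD digits:       %x%x%x%x\n", (bits >> 12) & 0xF,
               (bits >> 8) & 0xF, (bits >> 4) & 0xF, bits & 0xF);     /* 4241 */
        return 0;
    }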

Interpretation is a Function of the Instruction


The bit pattern interpretation is carried in the instructions that operate on the registers, not in the registers themselves. Thus when you choose to add two numbers with the add instruction, the integer interpretation is used by the microprocessor to carry out the arithmetic. The processor essentially adds the two least significant bits to form a sum and a carry bit. The carry bit is 0 if the two bits are (0, 0), (0, 1) or (1, 0). The carry bit is 1 for the remaining case (1, 1). The carry bit is then added to the next most significant bits, bit 1 of the two registers. Three bits added together (the two bits plus a preceding carry) also yields a sum bit and a carry bit. Note that 1+1+1 = 3 is the largest possible such sum, and 3 can be represented by two bits. This process is repeated across the two registers, from the least to the most significant bits, to form a 16-bit sum.

At bit 15, the most significant bit, another carry of 1 may be generated. This indicates that the sum of the two numbers is too large for a 16-bit register, that is, the sum is greater than 65,535. That carry bit is sent to the carry flag in the flags register.

The logic for all these operations can be found in the arithmetic-logic unit, or ALU, built into virtually every processor made. Addition is such a basic operation that every CPU manufacturer prefers to build these operations into the hardware of the chip rather than rely on some higher-level software to carry them out. That provides high performance.
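The ripple of carries described above can be simulated directly in C, one bit position at a time (an illustrative sketch only; a real ALU does this in parallel hardware, and the function name rippleAdd is ours):

    #include <stdio.h>
    #include <stdint.h>

    /* Add two 16-bit words bit by bit, returning the sum and
       reporting the final carry out of bit 15. */
    uint16_t rippleAdd(uint16_t a, uint16_t b, int *carryOut)
    {
        uint16_t sum = 0;
        int carry = 0;
        for (int n = 0; n < 16; n++) {
            int bitA = (a >> n) & 1;
            int bitB = (b >> n) & 1;
            int s = bitA + bitB + carry;      /* 0..3: fits in two bits */
            sum |= (uint16_t)((s & 1) << n);  /* low bit is the sum bit */
            carry = s >> 1;                   /* high bit is the carry bit */
        }
        *carryOut = carry;                    /* would set the carry flag */
        return sum;
    }

    int main(void)
    {
        int cf;
        uint16_t s = rippleAdd(65000u, 1000u, &cf);
        printf("sum=%u carry=%d\n", s, cf);   /* 464, carry=1: 66000 mod 65536 */
        return 0;
    }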

Representing Larger and Smaller Numbers


The astute reader will appreciate that numbers between 0 and 65,535, or even 0 and 4,294,967,295, while useful for many computations, are not particularly useful for scientific work. Values found in the real physical or chemical world have far more extreme magnitudes. An atom has a width that can be expressed in meters; it's approximately 10^-13, or 0.0000000000001, meters. That's a fraction, and it has no counterpart among our integers. The nearest star after the Sun is about 4 light-years away. There are 31,536,000 seconds in a year, and light travels at 186,000 miles per second. So that star is about 2.35x10^13 miles away. That's too big for a 32-bit integer register.

We'd like computers to perform arithmetic on such small and large numbers, so we obviously need a different way of representing numbers than our simple binary integer form. That form is the floating-point number.

IEEE Floating-point Standard


The IEEE 32-bit floating-point standard is used by most C compilers as the standard type float. There's also a 64-bit standard used to support the C type double. The 64-bit standard provides a larger range of magnitudes and higher precision.

A floating-point number carries a number as three separate fields, a sign bit, an exponent field and a mantissa field. The exponent part is used as a power of 2 (or 16 in some systems), and the mantissa is used as a magnitude. The sign bit is set for a negative number and clear for a positive number. In particular, the 32-bit IEEE standard allocates bits like this:

bit index    use
31           sign
30..23       exponent E
22..0        mantissa M

Thus we have 23 bits allocated to the mantissa and 8 bits to the exponent. One bit is used for the algebraic sign of the number. If all the bits are 0, then we have a number numerically equal to zero.

The exponent bits, if regarded as a number, are the actual power-of-two, offset by +127. Thus if the exponent E is 130, then the exponent value of the number is 2^(130-127) or 8. In general, the exponent value is 2^(E-127), where E is the integer value of the exponent part, treated as an unsigned 8-bit binary number.

The mantissa bits (23 in all) form the last 23 bits of a 24-bit mantissa. The leading bit is always 1, which is not part of the mantissa field M. The extended mantissa (24 bits) is regarded as a number greater than or equal to 1, but less than 2, by attaching a binary point just behind the leading 1 bit. So if the mantissa M were 0000...000, its value is exactly 1.0. Here's how that's obtained: attach the (hidden) 1 bit in front of M: 1000...0000. Then the value of the mantissa is 1/1 + 0/2 + 0/4 + ..., which is 1.0. If the mantissa M were 11000...000, then its value is 1.11b (binary), or 1 + 1/2 + 1/4 = 1 3/4, or 1.75 decimal.

Since there are 24 bits in this fractional value, the least bit corresponds to 2^-23, or about 119x10^-9. This is the smallest interval possible in the mantissa, so we say that the precision of this floating-point number form is approximately 1 part in 8,000,000. This precision of representation is approximately the same whether the number is very large or very small. It corresponds to the precision of a measurement as understood by a physicist. Given an instrument capable of measuring the diameter of the earth (about 7900 miles) to this level of precision, the earth's diameter would be known to within 5 feet. The same precision would apply to an instrument capable of measuring the length of a meter stick, except that this time, the length would be known to within 1/8 micron, which is less than the wavelength of green light. That's only single-precision floating point. Double-precision floating point provides considerably more precision.

In general, consider the mantissa M to be an unsigned binary number; then the mantissa value is 1 + M*(2^-23). We obtain the arithmetic value of a floating-point number by multiplying the exponent value by the mantissa value. Thus if the exponent E is 130, the exponent value is 8. If the mantissa M is 11000..0000, its value is 1 3/4. Thus the value of the floating-point number is 8 * 1 3/4, or 14. It happens that this value is exactly an integer; however, if the mantissa M were 111100..000, or 1 15/16 instead, then the value of the number is 8 * 1 15/16, or 15.5.

Since the largest exponent is 255 (the largest possible 8-bit number), the largest exponent part is 2^128, corresponding to a multiple of 3.4x10^38. The largest possible value for a 32-bit float is then 6.8x10^38, since the largest mantissa is just a shade less than 2. The smallest possible exponent is 0, yielding an exponent part of 2^-127, or 5.88x10^-39. Since the smallest mantissa is 1, the smallest float (other than 0) is 5.88x10^-39.

The IEEE 64-bit floating-point standard provides more precision and a wider range of exponents. It corresponds to the C standard type double. As we'll see, the Intel FPU actually supports an even larger floating-point standard, one that supports 32-bit integer arithmetic accurately as well as IEEE 64-bit floating-point.
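This field layout is easy to verify in C by pulling a float apart and re-evaluating the formula above. A minimal sketch, assuming a normalized, nonzero number (not infinity or NaN); it uses the 14.0 example from the text:

    #include <stdio.h>
    #include <string.h>
    #include <math.h>
    #include <stdint.h>

    int main(void)
    {
        float f = 14.0F;              /* should show E=130, mantissa 1.75 */
        uint32_t w;
        memcpy(&w, &f, sizeof w);     /* reinterpret the 32 bits */

        uint32_t sign = w >> 31;
        uint32_t E    = (w >> 23) & 0xFF;   /* offset exponent */
        uint32_t M    = w & 0x7FFFFF;       /* 23 stored mantissa bits */

        double value = (sign ? -1.0 : 1.0)
                     * (1.0 + M * pow(2.0, -23))   /* hidden leading 1 bit */
                     * pow(2.0, (int)E - 127);     /* remove the +127 offset */

        printf("sign=%u E=%u M=%06x value=%g\n",
               (unsigned)sign, (unsigned)E, (unsigned)M, value);
        return 0;
    }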

Infinity and Indefinite Numbers


Special floating-point forms are provided for infinity and indefinite numbers. For example, division of 1.0 by 0.0 will generate a positive infinity, which has a special floating-point number form. That division will also set the FPU status flag bits in a special way. Division of 0.0 by 0.0 generates an indefinite form, "not a number", or NAN for short. The C printf function recognizes these forms and will generate a short string that describes it, rather than a number. Here's a short program that generates and prints these number forms:
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    float f1= 1.0F, f0= 0.0F;
    printf("+ infinity: %f\n", f1/f0);
    printf("- infinity: %f\n", (-f1)/f0);
    printf("indefinite: %f\n", f0/f0);
    return 1;
}

Here's what Visual C++ vs. 5.0 generates when the above is compiled and executed:
+ infinity: 1.#INF00
- infinity: -1.#INF00
indefinite: -1.#IND00

Floating Point Arithmetic


Common arithmetic on floating-point numbers is rather complicated. Here's a sketch of how the operations are done.

Sign change. Changing the sign of a float is easy--just reverse the sign bit.

Addition. Adding two floats requires that the smaller one be expressed in such a way that the two exponents are equal. Thus if the exponents are 15 and 18 (after removing the offset), then the mantissa of the first number must be shifted to the right 3 places, corresponding to a division by 2^3. (The hidden leading 1 bit must be considered and shifted as well.) Its exponent can then be increased by 3 to compensate, making the two exponents the same. The mantissas can then be added (or subtracted, depending on their sign bits). The resulting number has the same exponent as before, but it must now be normalized. The most significant bit may or may not be 1, yet it must be to conform to the standard. It's also possible that the sum has produced a non-zero carry. If a carry has occurred, then we shift the result right by one and increase the exponent by one to compensate, to get a valid mantissa. If a carry didn't occur, but the leading bit is not a 1, then it must be shifted left by enough places to yield a one. The number of shifts must be subtracted from the exponent of the result to compensate. (A C sketch of this alignment process follows this list.)

Subtraction. Subtraction is essentially addition with the sign bit of the second number reversed.

Multiplication. Here, the two mantissas can be multiplied as binary numbers, yielding of course a much larger number of bits. The result must be normalized by a shift, since the product may be between 1 and 4. The exponent parts are added, after removing their offset, since multiplying two exponentials 2^r * 2^s is equivalent to adding their exponents: 2^(r+s). Any bits beyond the 24-bit capacity can be used to round the least significant bit, then discarded. Overflow will occur if the resulting exponent exceeds the maximum exponent size for this float.

Division. Division is begun by subtracting the second exponent from the first. An integer division of the mantissas must then be carried out. The result may be an overflow, or divide-by-zero. Note that since both mantissas are between 1 and 2, their quotient must be between 1/2 and 2. In any case, the resulting mantissa quotient must be normalized to find the resulting exponent part of the result. This amounts to shifting the quotient's mantissa left by 1 if it's less than 1, then subtracting 1 from the quotient's exponent.
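Here is the promised sketch of the addition recipe in C, on the decoded fields. It adds two positive normalized floats by aligning the smaller mantissa, adding, and renormalizing on carry; it's a deliberate simplification (no rounding, signs, zeros or overflow handling, and it assumes the exponents differ by less than 24), and toyAdd is our own name:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Add two positive normalized IEEE floats field by field. */
    float toyAdd(float a, float b)
    {
        uint32_t wa, wb;
        memcpy(&wa, &a, 4);
        memcpy(&wb, &b, 4);

        int ea = (wa >> 23) & 0xFF, eb = (wb >> 23) & 0xFF;
        uint32_t ma = (wa & 0x7FFFFF) | 0x800000;   /* restore hidden 1 bit */
        uint32_t mb = (wb & 0x7FFFFF) | 0x800000;

        if (ea < eb) {                              /* make a the larger one */
            int t = ea; ea = eb; eb = t;
            uint32_t u = ma; ma = mb; mb = u;
        }
        mb >>= (ea - eb);                  /* align the smaller mantissa */

        uint32_t m = ma + mb;              /* 24-bit mantissa addition */
        if (m & 0x1000000) { m >>= 1; ea++; }   /* carry out: renormalize */

        uint32_t w = ((uint32_t)ea << 23) | (m & 0x7FFFFF);
        float r;
        memcpy(&r, &w, 4);
        return r;
    }

    int main(void)
    {
        printf("%g\n", toyAdd(6.5F, 1.25F));   /* prints 7.75 */
        return 0;
    }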

Organization of the Intel FPU


Floating-point and integer arithmetic in the Intel FPU is performed internally on 80-bit words. There are eight such 80-bit registers, and they are organized as a push-down stack. Each register carries a 64-bit mantissa, a 15-bit exponent and a 1-bit sign. In addition to providing sufficient accuracy for double-precision (8-byte) floating-point operations, the 64-bit mantissa provides accurate integer arithmetic for all integers up to 8 bytes. The eight register locations can be addressed directly in Masm as st(0), st(1), ..., st(7).

Appendix 3. The Intel FPU, page 504

In general, the FPU is organized as a stack machine. To perform any operation, one or two numbers must first be pushed into the stack, then the operation is performed on the stack top. Other instructions permit popping the stack top. (Some recent extensions to the FPU instruction set permit an operand to appear with an arithmetic instruction. However, we'll use the FPU as a stack machine.)

The stack top register is st(0), the one just under it is st(1), etc. A stack push operation discards st(7), copies st(6) to st(7), copies st(5) to st(6), etc. The new entry is copied into st(0). A stack pop operation operates in the reverse order. This may seem very time-consuming, but a full set of register transfers can be done in parallel in one machine cycle.

All operations involving a transfer between the FPU and the CPU must be through memory. There are no operations for directly transferring a value from a CPU register to the FPU stack. However, the FPU memory operations include all the 80x86 memory access modes. These are provided through 80x86 address calculation services, which can be used in the FPU. Note that while the FPU can't directly access the CPU registers EBX, EBP, etc., it can take advantage of the addressing capabilities of the CPU in determining the physical memory address of an instruction operand. So an addressing mode like [EBX+87] is acceptable in an FPU instruction.

Since the FPU operates efficiently as a stack machine, very little optimization is needed in a compiler, other than constant folding. When a floating operation is required, the compiler merely generates instructions to push the values into the FPU stack, then later execute an appropriate stack operation. When the value must be assigned to a memory location, this can be done with one or two instructions. However, the compiler should keep track of how many numbers have been pushed in the FPU stack; if this exceeds 8, then it must take special action to extend the FPU stack or declare an error.

All four arithmetic operations are supported in both floating-point and integer form. In addition, the 8087 and its emulators provide fast evaluation of several trigonometric, logarithmic, exponential and hyperbolic functions. Square root is rapidly calculated, for example, through a single instruction, fsqrt. Instructions for converting an integer to a float and vice versa are also provided. Other FPU instruction modes are provided in the Pentium processors. The arithmetic instructions can take one or two variables as operands. Numbers in the stack can also be manipulated. For details, refer to the Intel documentation.
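Code generation for such a stack machine is just postfix evaluation. Here's a miniature C model of the idea (our own toy, not the FPU itself): each push mirrors fld, the operators mirror fadd and fmul, and the final pop mirrors fstp:

    #include <stdio.h>

    static double st[8];       /* model of the 8-register FPU stack */
    static int top = -1;

    static void fld(double v)  { st[++top] = v; }      /* push */
    static double fstp(void)   { return st[top--]; }   /* pop to "memory" */
    static void fadd(void)     { st[top-1] += st[top]; top--; }
    static void fmul(void)     { st[top-1] *= st[top]; top--; }

    int main(void)
    {
        /* r := (a + b) * c  with a=2, b=3, c=4 */
        fld(2.0);     /* push a */
        fld(3.0);     /* push b */
        fadd();       /* a+b now on the stack top */
        fld(4.0);     /* push c */
        fmul();       /* (a+b)*c */
        printf("r = %g\n", fstp());   /* prints r = 20 */
        return 0;
    }

As the text notes, a real compiler must also track the stack depth so it never exceeds the eight registers.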

Loading a Float from Memory


The instruction fld expects a 32- or 64-bit floating-point number in memory (recall that Masm expects to see a declaration for the variable's name), converts it to the 80-bit internal floating-point form, and pushes it in the FPU stack.

Storing a Float to Memory


Instruction fstp stores the float on the stack top to memory (32- or 64-bit form), then pops the stack. fst does the same, except the stack is not popped.

Floating Arithmetic
Although there are many alternative instruction forms provided in the Pentium series, we will use just the stack-operation form, as follows:
- fadd (no operands): adds the two numbers in the stack top, popping them, then pushing the result.
- fsub (no operands): subtracts the two numbers in the stack top, popping them, then pushing the result. Under Masm, this operates in reverse: to get a-b, you need to push b, then a, then fsub.

Appendix 3. The Intel FPU, page 505

- fmul (no operands): multiplies the two numbers in the stack top, popping them, then pushing the result.
- fdiv (no operands): divides the two numbers in the stack top, popping them, then pushing the result. Like fsub, this operates in reverse.
- fcompp (no operands): this expects two numbers in the stack top. It compares them, setting some FPU internal flags, then pops both of them.

There are extended forms of these instructions that take various operands. We won't use them.

Conversion of Integer to Float


The Intel chip provides an instruction that converts from a 32-bit integer form to a floating-point form. This is fild. This instruction expects a memory operand, a 32-bit signed integer. It converts the integer to floating form, pushing the result onto the FPU number stack. This conversion is relatively easy for the processor. The integer is first loaded into the 64-bit mantissa register in the FPU, and the exponent is set to +64, corresponding to the value of the integer. It's then normalized by shifting left until the most-significant bit of the mantissa is a 1. That's lopped off to form an IEEE standard number. Note that the 64-bit internal mantissa can carry even a 64-bit quad integer accurately, with no loss of precision.
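That normalize-and-adjust loop can be imitated in C for a positive 32-bit integer (a sketch of the idea, not Intel's actual microcode; intToFloat is our name, and values wider than 24 bits would be truncated rather than rounded here):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Convert a positive 32-bit integer to an IEEE float by hand:
       shift until the leading 1 reaches the hidden-bit position,
       adjusting the exponent to compensate. */
    float intToFloat(uint32_t n)          /* n > 0 */
    {
        int e = 127 + 23;                 /* exponent if the 1 bit sat at bit 23 */
        uint64_t m = n;
        while (m >= (1u << 24)) { m >>= 1; e++; }   /* too wide: shift right */
        while (m <  (1u << 23)) { m <<= 1; e--; }   /* normalize: shift left */
        uint32_t w = ((uint32_t)e << 23) | ((uint32_t)m & 0x7FFFFF);
        float f;
        memcpy(&f, &w, 4);
        return f;
    }

    int main(void)
    {
        printf("%g %g %g\n", intToFloat(1), intToFloat(5), intToFloat(1000000));
        return 0;                         /* prints 1 5 1e+06 */
    }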

Conversion of Float to Integer


In order to convert a float to an integer, the fractional part must be dropped. So the question is whether the number should be rounded (find the integer whose value is nearest the float) or just truncated (find the largest integer less than or equal to the float). Both operations are provided in the FPU, but the result is still essentially a floating-point number, albeit one with a zero fractional part.

The instruction fistp stores the float into a 32-bit or 64-bit memory location as an integer, by rounding, then pops the FPU stack. If the number is instead stored to memory through fstp, the resulting float is easily converted to an integer by extracting the mantissa, then shifting it right by a number of places indicated by the exponent.
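C shows the two policies side by side: a cast truncates, while rintf rounds to the nearest integer under the current rounding mode, much as fistp does with the FPU's default settings (a quick illustration):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float f = 2.7F;
        printf("truncated: %d\n", (int)f);          /* 2: fraction dropped */
        printf("rounded:   %d\n", (int)rintf(f));   /* 3: nearest integer */
        return 0;
    }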

Conversion of a Float into ASCII form


This requires a software program, since there are numerous ways of representing a floating-point number as an ASCII string. Few computers support this operation in hardware. However, the software can make use of the FPU arithmetic operators.

The principal task in conversion to a decimal form is finding an appropriate power of ten so that the float can be described as V x 10^n, where n is some integer, and V is between 0.1 and 1.0. This is usually done by multiplying or dividing the float by some power of 10 until its value is between 0.1 and 1.0. The resulting fractional part V can then be multiplied by some power of 10 to yield an integer M. By writing M out as a decimal integer, fixing a decimal point to it, along with a power of ten expressed by the exponent part, we can obtain a suitable ASCII decimal representation of the number. This algorithm is essentially written into the C printf function, which provides a variety of ways of expressing a float.
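Here is a stripped-down C version of that scaling idea: bring V into [0.1, 1.0) with powers of ten, then emit six mantissa digits and the decimal exponent (our own sketch; a real converter such as printf must also handle rounding, zero, and negative values):

    #include <stdio.h>

    /* Print a positive double as 0.DDDDDDE+n using the power-of-ten
       scaling described above. */
    void printFloat(double v)
    {
        int n = 0;
        while (v >= 1.0) { v /= 10.0; n++; }   /* too big: divide by 10 */
        while (v < 0.1)  { v *= 10.0; n--; }   /* too small: multiply by 10 */

        long M = (long)(v * 1000000.0);        /* six digits of the mantissa */
        printf("0.%06ldE%+d\n", M, n);
    }

    int main(void)
    {
        printFloat(15.67281);    /* 0.156728E+2 */
        printFloat(0.00042);     /* 0.420000E-3 */
        return 0;
    }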

Conversion of an ASCII Number to a Float


This is relatively easy, and usually done through software. The problem is one of finding the ASCII characters representing the mantissa part of the number, for example,
15.67281


As these are scanned, an equivalent floating-point number can be formed. When the decimal point is seen in the input characters, the fractional part can be formed. Then the ASCII characters representing a power of 10, for example,
E-13

can be translated into the appropriate power of ten. More floating-point operations result in the final internal number. The Qparser lexical analyzer contains a function that converts strings into doubles. You can find it in file lib/lexf.cpp.
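A simplified C rendition of that scan (only a sketch of the kind of thing lib/lexf.cpp does: it accepts digits, one decimal point and an optional E exponent, with no error checking):

    #include <stdio.h>

    /* Convert a numeric string like "15.67E-2" to a double,
       forming the mantissa digit by digit as described above. */
    double asciiToFloat(const char *s)
    {
        double v = 0.0, scale = 1.0;
        int exp = 0, esign = 1;

        for (; *s >= '0' && *s <= '9'; s++)        /* integer part */
            v = v * 10.0 + (*s - '0');
        if (*s == '.')
            for (s++; *s >= '0' && *s <= '9'; s++) /* fractional part */
                { v = v * 10.0 + (*s - '0'); scale *= 10.0; }
        if (*s == 'E' || *s == 'e') {              /* power-of-ten part */
            s++;
            if (*s == '-') { esign = -1; s++; } else if (*s == '+') s++;
            for (; *s >= '0' && *s <= '9'; s++)
                exp = exp * 10 + (*s - '0');
        }
        v /= scale;
        for (int i = 0; i < exp; i++)              /* apply the exponent */
            v = (esign > 0) ? v * 10.0 : v / 10.0;
        return v;
    }

    int main(void)
    {
        printf("%g %g\n", asciiToFloat("15.67281"), asciiToFloat("15.67E-2"));
        return 0;                                  /* prints 15.6728 0.1567 */
    }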

Choice of Floating-point or Integer Operations


The FPU can be used for integer operations, though most compilers prefer the CPU for these. The FPU operates on all numbers in floating-point form. However, the large mantissa precision means that it can also effectively perform integer additions, subtractions and multiplications in floating form without losing a single bit.

Division is a different story. The result of a floating division will almost always produce a fractional part, requiring a decision whether to round or truncate. If the remainder is required, that calls for a second multiplication, of the divisor by the quotient, followed by a subtraction, to yield an integer remainder. The result may not always agree with a true integer division, since the fractional parts are always rounded to yield the highest precision floating-point result.

Whether integer arithmetic performance is higher with the FPU or the CPU is problematic. We've seen that a number transfer from a CPU register to an FPU register (or vice versa) must be through some memory address. This would seem to imply many machine cycles lost to memory access. However, in most modern Pentiums, memory is cached, meaning that the most frequently used memory cells are carried in fast electronic form on the chip. Thus if one memory location is frequently used for CPU-FPU transfers, the transfer will likely be very fast, bypassing the actual RAM transfer mechanism.

Transferring Floating-point Status to the CPU


Some advanced Pentiums provide a special instruction that transfers some of the (many) floatingpoint status bits to the CPU in one instruction. This is not provided on all Pentiums (yet), so we instead use the instruction fstsw, which writes a 16-bit word to memory containing the FPU status bits. These don't correspond to the CPU status bits, and require several instructions to extract such flags as the sign bit and zero flags, writing them to the CPU flags register. For more details, see the example Comparing Two Floats below.

Summary of Basic FPU Instructions


The Intel FPU has a very rich set of instructions. For a complete discussion, the reader should refer to an Intel reference manual [1]. The following are the subset of instructions used in the student Pascal compilers:
- fild: copy the integer doubleword or quadword from memory, pushing it as a float into the FPU stack.
- fld: copy the floating-point float or double from memory, pushing it into the FPU stack.
- fistp: pop the FPU stack, copying the result as an integer to the memory location. The conversion is by rounding to the nearest integer value.
- fstp: pop the FPU stack, copying the result as a float or double to the memory location.


- fadd: add the two floats on the FPU stack top, popping them, then pushing the result.
- fsub: subtract the two floats on the FPU stack top, popping them, then pushing the result.
- fmul: multiply the two floats on the FPU stack top, popping them, then pushing the result.
- fdiv: divide the two floats on the FPU stack top, popping them, then pushing the result. If a division by zero occurs, a CPU trap results.
- fsqrt: compute the square root of the float on the FPU stack top, replacing it with the result.
- fabs: replace the float on the FPU stack top by its absolute value. (Set the sign bit to 0.)
- fchs: reverse the sign of the float on the FPU stack top. (Negate the number.)
- frndint: round the stack top element, replacing it with the integer result.
- fstsw: write the 16-bit status word of the FPU to a memory destination. This will contain the result of a comparison or arithmetic operation.
- fcompp: compare the two floats on the FPU stack top, popping both. The result sets flags in the FPU status register.
- fwait: (a CPU operation). This causes the CPU to wait (if necessary) until the FPU has completed an operation. It's needed to avoid a race condition between the two processors, in particular when the FPU is expected to write a result to memory and the CPU picks up that result. Without an fwait, the CPU may access the memory operand before the FPU has written to it.

Let's now demonstrate some of the operations as used in the student Pascal compilers.

Example 1. Floating Multiply and Saving the Result


We have two floating-point numbers as variables r1 and r2. We wish to find their product and store the result in r2.
; r2 := r1*r2    (a Pascal source line)
fld  R1          ; load and push a float
fld  R2
fmul             ; multiply stack top pair, replaced by product
fstp R2          ; write the result to memory, popping the FPU stack

This follows the pattern of first loading the two floats into the FPU stack, performing the multiplication, then storing the result in the destination memory address. The multiplication fmul expects two numbers on the stack top. Both are popped and replaced by their product. The other binary arithmetic operations follow the same pattern.

Example 2. Loading a 32-bit Integer in Memory to the FPU


This is easy: the instruction fild performs exactly that operation.

Example 3. Loading a 16-bit Integer in Memory to the FPU


Converting a 16-bit integer in register AX to a float, and pushing the float into the FPU stack requires several instructions. We therefore have embedded this unit operation in an assembler function, int2float. (These instructions could also be generated inline by the compiler, or generated inline through a macro call). Here's what int2float looks like:
; the following function gets a 16-bit integer in AX loaded as a float
; through a temporary memory location RTMP

Appendix 3. The Intel FPU, page 508

int2float PROC NEAR
    cwde
    mov  DWORD PTR RTMP,eax
    fild RTMP
    ret
int2float ENDP

The cwde instruction converts the word in AX to an integer double word in EAX, preserving the sign. This register value is then copied to a temporary memory location RTMP. The instruction fild reads the doubleword in RTMP, converts it to floating-point, then pushes it into the FPU stack.

It turns out that we only need one temporary location RTMP for all such FPU-CPU memory transfers. That single transfer address will most likely be in the 80x86 cache at runtime, causing these operations to move with no real RAM transfers at all.

NOTE: It may be better to replace the int2float subroutine with a macro. Also, if integers are carried as 32-bit words in the first place, the cwde is unnecessary.

Example 4. Floating Add Instruction Sequence with Integer Conversion


Here is a composite of several unit operations. A real number r1 is to be added to an integer i1 (32bit), and the result copied to a memory location carrying a real number:
; r2 := r1 + i1;
fld  R1       ; push r1 into the FPU stack
fild I1       ; push i1 into the FPU stack
fadd          ; perform the floating addition on FPU stack
fstp R2       ; store the floating result in r2,
              ; popping the FPU stack

Example 5. Loading a Constant


The problem here is to transfer a floating-point constant to a float variable, i.e. copy the constant to memory. There are several ways to carry this out, but all of them at some point require the compiler or assembler to construct the internal binary form of the floating-point constant. We've chosen to have our student Pascal compiler form the constant. How this is done can be seen in function real2hex, found in file lib/codegen.cpp. Unfortunately, our simple-minded solution can only be guaranteed to work on an Intel host processor, i.e. the student compiler should be compiled and executed on an Intel platform. For a more portable host platform, it would be better to have the 80x86 assembler perform the constant conversion. The floating-point number 5.0 looks like this in hexadecimal, as a 32-bit double word:
040a00000h

This can be transferred to variable r2 with one mov instruction, like this:
; r2 := 5;
mov DWORD PTR R2,040a00000h

Note that this does not involve the FPU at all. Also note that the compiler has applied an implicit cast to the integer constant 5, making it effectively 5.0. Finally, note that the instructions satisfy the little-endian byte-ordering convention of the Intel target platform.
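Both the bit pattern and the byte ordering are easy to confirm from C on an Intel (little-endian) host; this little dump is ours, purely for checking:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float f = 5.0F;
        uint8_t b[4];
        memcpy(b, &f, 4);
        /* On a little-endian Intel host this prints 00 00 a0 40:
           the double word 040a00000h stored least significant byte first. */
        printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
        return 0;
    }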

Example 6. Comparing Two Floats


Two floats in the FPU stack can be compared and popped, like this:


if r1 > r2 then writeln('OK 1')
else writeln('NOT ok 1');

    fld  R2
    fld  R1
    call fcompare
    jle  $L_30
    ; etc.

This example can be found in file pasprogs/t15.pas. The fcompare function has several instructions designed to perform the fcompp instruction, then transfer the result flags to the CPU's flags. Here's fcompare, which can be found in lib/aservice.asm:
fcompare PROC NEAR
    fcompp                    ; floating point compare
    fstsw WORD PTR RTMP       ; get the FPU status word
    fwait
    mov ah,BYTE PTR RTMP+1    ; pick up bits 8..15
    ; here, bit 0 (C0) must go to bit 7
    ; and bit 6 (C3) must go to bit 6
    mov al,ah
    and ah,040h               ; mask C3 bit, ZF bit
    and al,1                  ; mask C0 bit, ~SF bit
    ror al,1                  ; move C0 to bit 7
    or  ah,al                 ; transfer the SF flag
    sahf                      ; transfer result to CPU flags
    ret
fcompare ENDP

Notice that fcompp is done first. This compares two floats in the FPU stack, popping both. The result is left in the FPU status register in the form of two bits called C0 and C3. (Other conditions, such as infinity, not-a-number, etc. are in other bits, and are ignored by this function.) The CPU flags register is not affected by fcompp; however, the C0 bit is equivalent to the SF bit, and the C3 bit is equivalent to the ZF bit, provided that the comparison a:b is done by first pushing b, then a.

We first transfer the status word to RTMP with the FPU instruction fstsw, then wait for the transfer to complete. Bits 8 to 15 of the status word contain C0 and C3, but not in the same positions as in the CPU flags register. Five instructions are needed to get these into register AH in the correct position, then sahf transfers AH to the flags register. At this point, the result of the floating-point comparison is logically equivalent to an integer comparison, cmp.

Although this is a messy operation, on most Pentium systems the time required is still a fraction of the time required for the floating-point comparison itself. Intel has recognized this problem. Some advanced Pentium processors provide a special instruction that transfers the appropriate FPU flag bits to the CPU flags, obviating the use of the above function.

Example 7. Initializing the Floating-point Coprocessor


The FPU can be configured in a variety of ways. The details are beyond the scope of this paper. Our student compilers will be called from a C program, which attends to most of the configuration details.


Appendix 4: A Pascal Grammar


W. A. Barrett, San Jose State University napp4.doc

Introduction
Jensen & Wirth Pascal [1] can be expressed as an LR grammar, which we will use to develop a more complete Pascal compiler. Pascal is usually expressed in the form of syntax diagrams, but it is as easily expressed in the form of a production rule-based grammar.

A complete grammar pascal.grm may be found in directory qparser\pascal5, along with the usual makefile. This is a complete Jensen & Wirth compiler, lacking only a few features, as explained toward the end of this chapter. A suite of Pascal programs for testing purposes is in directory qparser\pasprogs. Each of these has been compiled with the pascal5 compiler, assembled, and linked. They also execute cleanly and perform according to their requirements.

The Lexical Analyzer


The Pascal grammar file contains the line
lexfile= ..\lib\pascal.lex

This informs the lexical analyzer that we wish Pascal token conventions to be used for any source language. Pascal identifiers are the same as C identifiers. Pascal numbers -- fixed and floating-point -- are essentially those of C, except that C provides forms for octal and hex numbers which aren't in the Pascal report. Pascal strings use the single quote mark ( ' ) to open and close a string. All identifiers and reserved words are case-insensitive, so the lexical analyzer upshifts every identifier and reserved word.

This compiler supports a 32-bit signed integer, and a 32-bit IEEE floating-point number, as the runtime Pascal types integer and real. The lexical analyzer and code evaluators will carry integers as 32-bit longs, and floats as 64-bit doubles. They are only truncated when runtime code is generated.

Pascal comments are enclosed in braces, for example:
{ this is a comment }

Our compiler will generate reasonably well optimized symbolic assembly for any Pentium, in protected mode, using 32-bit registers and address offsets. The arithmetic and logical operations for integers, reals and Booleans are reasonably well optimized. Extensions to other architectures or to additional number types can be made from the source code supplied, albeit with some effort.

Discussion of the Grammar


Jensen and Wirth Pascal is described in the book User Manual and Report [1] in the form of syntax diagrams and production rules. Unfortunately, the production rule set given there is ambiguous, and cannot be used in an LR parser without modification. The syntax diagrams are not hard to convert to production rules, so we've chosen to do that instead.
We'd prefer to have our grammar express the Pascal language exactly, but that isn't possible. It's more important that the grammar be unambiguous, so that its LR parser can be trusted. Our grammar will cover the Pascal language rather closely, but it also contains some syntactic constructs that are not permitted by Pascal. Most of these illegal constructs are forbidden through Pascal's type rules. For example, two strings can be concatenated through the + operator. However, the grammar permits strings to be combined with any of the Pascal operators; for example,


'one string' OR 'another string'

is legal in the grammar, but it makes no sense. Such illegal forms are checked after the sequence has been parsed. Similarly, the write function permits parameters of the form
a*b+c:w:p

which means "print the value of a*b+c in a field w characters wide, with precision p". Such a parameter has no meaning in ordinary user-defined functions. However, our grammar permits it, for the sake of simplicity and to avoid making reserved words out of the names write, writeln, read, readln. It's noteworthy that these special parameter forms are not in the J&W syntax diagrams, although every compiler supports them! We'll discuss these cases as they arise during our discussion of the grammar.

The Lexical Rules


The Pascal Report diagrams for identifier, unsigned integer, and unsigned number are given below.
[Syntax diagrams: identifier -- a letter followed by any sequence of letters and digits; unsigned integer -- a sequence of digits; unsigned number -- an unsigned integer, optionally followed by a decimal point and more digits, optionally followed by a scale factor E with an optionally signed unsigned integer.]

These do not require any production rules in the grammar. They are recognized by the Qparser scanner as the lexical tokens Identifier, Integer, and Real. The lexical analyzer also directly recognizes a Pascal string, as the lexical token String.
We need to make an important point about the Identifier token. This of course has the syntactic form of a letter followed (optionally) by one or more letters and digits. Our lexical analyzer could be designed to look up every identifier in the compiler's symbol table in order to distinguish the various identifier types. It would then be possible to assign each such type a separate lexical code, using these in the grammar so that many type errors would appear as syntax errors. It turns out that this is a very difficult strategy in practice. The type rules of even a simple language are sufficiently complicated that the grammar would have to be much larger.
Instead, we adopt the philosophy that all identifiers are the same as far as the grammar is concerned; we merely use Identifier wherever one is required, without regard to its type. It turns out that we can get away with this; we can design an LR grammar that nicely covers all the Pascal operations. We can also perform type checks on the operations later, by using the symbol table in the LR reduce operations. Thus, we'll use the simple Identifier wherever the Report requires such objects as type identifier or constant identifier.


Numbers and Constants


An unsigned constant is an unsigned number or string. Thus this may be an Identifier, an Integer, a Real, or a String in our grammar. Here's the diagram:

[Syntax diagram: unsigned constant -- a constant identifier, an unsigned number, NIL, or a quoted character string.]

And here are the corresponding production rules:


UnsignedConstant : Real #UCREAL
                 | Integer #UCINTEGER
                 | String #UCSTRING
                 | TRUE #BOOLEAN
                 | FALSE #BOOLEANF
                 | NIL #VPROCNIL

We've left out identifier because it will appear elsewhere in an equivalent manner. TRUE and FALSE are added as reserved words. A constant is an optionally signed number, signed identifier, or a string. Here's the diagram:

[Syntax diagram: constant -- an optionally signed (+ or -) constant identifier or unsigned number, or a quoted character string.]

The loop on character and the two quote marks become a String in the production rules, which are:
IntConst : Identifier #INTCONST0
         | - Identifier #INTCONST1
         | + Identifier #INTCONST2
         | Integer #INTCONST3
         | - Integer #INTCONST4
         | + Integer #INTCONST5
         | String #INTCONST6

Although the Report calls for a constant identifier, the meaning here is that a user can give a name to a constant using the CONST declaration, like this:


CONST
  Pi = 3.14159;
  Circle = 360;

These names (Pi, Circle) can later be used to stand for these constant values. Thus a constant needs to include such a name, and we use Identifier for that purpose.
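For example, a named constant is then used exactly where a literal could appear. The following fragment is illustrative only, not from the text:

VAR quadrant: integer;
BEGIN
  quadrant := Circle DIV 4  { yields 90 }
END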

Variable
The variable syntax includes using a simple name as the variable, array dereferencing with one or more index arguments (e.g. [15]), field dereferencing (using the "." character), and pointer dereferencing (using the "^" character). The diagram contains a loop permitting compound variables to be formed, like this:
v.a[15,16]^.b[3]

The validity of such an expression at compile time can only be determined by looking up v in the symbol table, then checking its extensions against the type found therein. Thus v.a[15,16] requires that a be associated with v as a record field, and be an ARRAY type with two dimensions. The resulting type must be a pointer to a record containing a field b, which is itself a one-dimensional ARRAY. Here is the syntax diagram for variable:

[Syntax diagram: variable -- a variable identifier, followed by any sequence of index lists [ expression , ... ], field selections . field-identifier, and pointer dereferences ^.]

Here are the corresponding production rules:


Variable : Variable ^ #PNTR
         | Variable [ BooleanList ] #VARINX
         | Variable . Identifier #VARREC
         | Identifier #VIDENT
         | Identifier ( IOList ) #VFUNC

Note that a variable identifier and a field identifier are indistinguishable in the rules, until a symbol table lookup is done.
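For concreteness, here is one hypothetical set of declarations (not from the text; all type and field names are invented) under which the compound variable v.a[15,16]^.b[3] shown earlier is well-typed:

TYPE
  inner = RECORD
            b: ARRAY [1..5] OF integer
          END;
  outer = RECORD
            a: ARRAY [1..20, 1..20] OF ^inner
          END;
VAR
  v: outer;

Here v.a[15,16] yields a pointer to an inner record, so v.a[15,16]^.b[3] selects an integer.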

Factor
The factor diagrams describe these Pascal features:


- An unsigned constant, for example: 15E7
- A variable, for example: v1.a[15]
- A function call, with optional parameters, for example: fcn(a, b+c)
- A parenthesized expression, for example: (a*c-b)
- The NOT operator, which operates on a factor in a recursive way, for example: not not(a=b)
- A literal powerset, which is contained in a matched [ ] pair. Inside the brackets, a list of expressions defines the members of the powerset. Each member must evaluate to an enumerated type or an integer. Note that the powerset members can be runtime expressions, not just constants. For example: [1, 2, 5..25, 28, a+b, fcn(a, b)]

The factor diagram is given below:

[Syntax diagram: factor -- an unsigned constant, a variable, a function call with an optional parenthesized parameter list, a parenthesized expression, NOT factor, or a set constructor [ ... ] whose comma-separated elements are expressions or subranges expression .. expression.]
The corresponding production rules are given below.
Factor : UnsignedConstant
       | Variable
       | Identifier ( IOList ) #VFUNC
       | ( Expr ) #PAREN
       | NOT Factor #NOTOP
       | [ ExprList ] #SET
       | [ ] #SETEMPTY

ExprList : SubrangeElement
         | ExprList "," SubrangeElement #BOOLEANLIST

SubrangeElement : Expr #SRELEMENT1
                | Expr .. Expr #SRELEMENT2


A SubrangeElement is a subrange, i.e. 15..17, or an expression. Then an ExprList is a comma-separated list of these.

Term
A term is a factor, or a sequence of factors separated by one of the operators
* / DIV MOD AND

The / operator is a floating-point division. It produces a floating-point result, i.e. the quotient expressed as a number with a possible fractional part. Operator * is a floating-point multiplication. The DIV operator is an integer division. It expects two integer operands and produces the integer result, with any fractional part truncated. Thus 5 DIV 2 yields 2. The MOD operator yields the remainder of an integer division of the two operands. Thus 5 MOD 3 yields 2. The AND operator yields the logical and of the two operands. Thus 6 AND 3 yields 2, since 6 is binary 110, 3 is binary 011. Then 110 AND 011 is 010, which is 2. Here's the term syntax diagram:

[Syntax diagram: term -- a factor, or a sequence of factors separated by *, /, DIV, MOD, or AND.]

Here are the term production rules:


Term : Term * Unary #MPY
     | Term / Unary #QUOTIENT
     | Term MOD Unary #MODULO
     | Term DIV Unary #DIVIDE
     | Term AND Unary #ANDOP
     | Unary

Simple Expression
A simple expression defines the syntax for a unary - or unary +, and also for the binary operators +, - and OR.


[Syntax diagram: simple expression -- an optionally signed term, or a sequence of terms separated by +, -, or OR.]

The corresponding production rules are as follows:


SimpExpr : SimpExpr + Term #ADD
         | SimpExpr - Term #SUB
         | SimpExpr OR Term #OROP
         | - Term #NEG
         | + Term #PLUS
         | Term

Expression
An expression is a simple expression or a pair of simple expressions separated by a comparison operator (=, <>, <, >, >=, <=) or the set membership operator IN. These operators evaluate to a Boolean true or false. The IN operator requires that the second expression be declared as a powerset; the first expression must be compatible with that powerset as a member. The other operators work with integers, reals, Booleans and strings.
Operators =, <>, <, and <= also serve as powerset operators. Two powersets are equal (=) if they contain exactly the same member elements. Two powersets are not equal (<>) if some element is in one, but not in the other. If P and Q are powersets, then P < Q is true if Q contains every member found in P and Q is larger than P; P <= Q if Q contains every member in P. Operators > and >= are not defined for powersets.

[Syntax diagram: expression -- a simple expression, optionally followed by one of =, <, >, <>, <=, >=, IN and a second simple expression.]

The expression production rules are as follows:


Expr : SimpExpr
     | SimpExpr < SimpExpr #LESS
     | SimpExpr > SimpExpr #GTR
     | SimpExpr <= SimpExpr #LEQ
     | SimpExpr >= SimpExpr #GEQ
     | SimpExpr = SimpExpr #EQ
     | SimpExpr <> SimpExpr #NEQ
     | SimpExpr IN SimpExpr #INSET
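As a sketch of how the set operators might appear in source code (illustrative only, not from the text):

VAR small, digits: SET OF 0..9;
BEGIN
  small := [1, 2, 3];
  digits := [0..9];
  IF small <= digits THEN writeln('small is contained in digits');
  IF 3 IN small THEN writeln('3 is a member of small')
END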

Simple Type
A simple type is an identifier that stands for a type, or an enumerated type or a subrange. An example of an enumerated type is
(red, green, blue, yellow, violet)

which associates the identifier red with 0, green with 1, blue with 2, etc. An enumerated type is a way of associating a small, distinct set of integers with separate names. By using enumerated types instead of numbers, a programmer can avoid type errors resulting from mixed use of integers. They also make programs easier to read and understand. An example of a subrange is
22..55

This defines a subset of the set of integers, in this case, all those between 22 and 55 inclusive. The subrange type is used in array dimensions. It is also useful as a way of constraining a set of integers within an upper and lower bound. Most commercial Pascal compilers include a debugging mode in which violations of this constraint are detected and trapped at runtime. Such violations can lead to array bounds violations or other program errors. The Pascal syntax diagram for a simple type is given below:
[Syntax diagram: simple type -- a type identifier, an enumerated type ( identifier , ... ), or a subrange constant .. constant.]

The equivalent production rules are as follows:


SimpleType : Identifier #SIMPTYPE
           | EnumType
           | Subrange

EnumType : ( IdentList ) #ENUMTYPE

Subrange : IntConst ".." IntConst #SUBRANGE

IdentList : IdentList , Identifier #IDLIST1
          | Identifier #IDLIST2
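For example, all three SimpleType forms might appear together in a TYPE section like this (an illustrative fragment):

TYPE
  color = (red, green, blue, yellow, violet);  { enumerated type }
  range = 22..55;                              { subrange }
  hue   = color;                               { type identifier }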

Type
A type is an abstract declaration of a simple type or a composite of types and simple types.


[Syntax diagram: type -- a simple type, ^ type identifier, or an optionally PACKED form of: ARRAY [ simple type , ... ] OF type, FILE OF type, SET OF simple type, or RECORD field list END.]

A type includes an array of type, a set of type, a file of type, a record of types, and a pointer to a type. A simple type is also a type.
The packed attribute was designed for older computer systems in which variables were most efficiently aligned along word or double-word boundaries. In most modern machines, including the Intel 80x86 architectures, Motorola 680xx, and RISC processors, byte alignment of any variable is efficient. With the packed attribute in force, each variable in a record, file or array collection of types must be aligned on byte boundaries. Without the packed attribute, the compiler might align these units on whatever boundaries (word, double-word, quad-word) would yield the most efficient access code for the target CPU. For example, if a record contained a char followed by a real, then the compiler might insert three unused bytes after the char, so that the real would start on an address evenly divisible by four. If the char were followed by an integer of 16 bits, then only one unused byte would be inserted after the char.
A file of type provides a framework for saving and retrieving a sequence of type objects as a binary file. For example, a file of real would be expected to carry a sequence of real numbers in binary form. As a special case, a file of char is considered to be a text file. Some Pascal compilers recognize the text keyword (equivalent to a file of char) as carrying editable source lines, each line terminated by an end-of-line.
The corresponding production rules for this syntax diagram are given below.
Type : SimpleType
     | PntrType
     | FileType
     | SetType
     | ArrayType
     | RecordType

SimpleType : Identifier #SIMPTYPE
           | EnumType
           | Subrange

EnumType : ( IdentList ) #ENUMTYPE

Subrange : IntConst ".." IntConst #SUBRANGE

PntrType : ^ Type #PNTRTYPE

FileType : FILE OF Type #FILETYPE

SetType : SET OF SimpleType #SETTYPE

ArrayType : ARRAY [ SimpleTypeList ] OF Type #ARRAYTYPE

SimpleTypeList : SimpleType
               | SimpleTypeList , SimpleType #SIMPTLIST

RecordType : RECORD FieldList END #RECTYPE
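As an illustration (not from the text; the names are invented), here is a TYPE section exercising several of these forms:

TYPE
  vector   = ARRAY [1..10] OF real;
  charset  = SET OF char;
  realfile = FILE OF real;
  pnode    = ^node;
  node     = RECORD
               value: integer;
               next: pnode
             END;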

Field List
The field list syntax is used in a RECORD statement, as suggested by the type syntax diagrams. Here are the corresponding production rules:
FieldList : FieldItem
          | FieldList ";" FieldItem #FIELDLIST

FieldItem : ConstList ":" Type #FTYPEDECL
          | CASE Identifier ":" Type OF ConstList ":" CaseType #CASEFT
          | CASE Identifier OF ConstList ":" CaseType #CASEF

CaseType : ( FieldList )

ConstList : ConstList "," IntConst #CONSTLIST
          | IntConst

And here is the field list syntax diagram:


[Syntax diagram: field list -- semicolon-separated groups of comma-separated identifiers, each group followed by : type; optionally followed by a CASE part: CASE, an optional identifier :, a type identifier, OF, then semicolon-separated groups of constant labels, each with a : and a parenthesized field list.]

The CASE structure is similar to the UNION struct in C++. It provides for a group of type declarations that are intended to share the same memory space at runtime. Notice that a RECORD consists of a set of non-CASE elements, optionally followed by a CASE. Each group is given a constant label and is enclosed in parentheses. If more than one group is listed, they share the same memory space, and consequently cannot carry independent information. The groups can be distinguished by the type identifier following the CASE keyword, which must be compatible with the group labels.
If an identifier: type identifier is supplied after the CASE keyword, then this identifier is supposed to indicate at runtime which of the field lists this particular record supports. No runtime compiler support is provided for CASE structures; it is the programmer's responsibility to ensure the integrity of the stored variables and the CASE identifier. Here's an example of a RECORD with a set of three CASE fields:
TYPE
  date = integer;
  status = (married, widowed, divorced, single);
  person = RECORD
    ss: integer;
    sex: (male, female);
    CASE ms: status OF
      married, widowed: (mdate: date);
      divorced: (ddate: date; firstd: Boolean);
      single: (indepdt: Boolean);
  END; {person}

In this example, the record is supposed to provide specialized information about the person that depends on their marital status. If the person is married or widowed, then a marriage date mdate is supplied. If divorced, then a divorce date ddate and the Boolean flag firstd are supplied. Finally, if the person is single, then the Boolean flag indepdt is supplied.
These three fields will overlap each other at runtime. Since the divorced field is largest in size, enough space in each record is allocated for it. However, that space may only be used for the single datum, which is a single Boolean value. The marital status of the person will be carried in the ms field in each record. This must be set correctly by any program allocating or setting the values of a person record, but can then later be referenced to determine which of the three cases is contained in this record. The record will always contain the first two elements: ss and sex.
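For instance, a program filling in a person record for a divorced individual might set the tag and the matching variant fields like this (an illustrative fragment; the values are invented):

VAR p: person;
BEGIN
  p.ss := 123456789;
  p.ms := divorced;
  p.ddate := 19981215;  { date is just an integer in this example }
  p.firstd := true
END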

Parameter List
A parameter list is intended to be used in a function or procedure declaration, following the name of the function or procedure. It may be empty, or it may consist of a sequence of variable, function or procedure names, possibly associated with a type identifier. For a FUNCTION, VAR or value parameter, several names may be listed (with comma separators) and associated with some type identifier. Note that a full type declaration is not supported here. The type identifier for a FUNCTION is the type returned by the function. For a PROCEDURE parameter, only one or more identifiers are expected; procedures do not return anything.
A FUNCTION or PROCEDURE specified in a parameter list becomes a functional. This is the name of some function or procedure that will be called within the declared function. The attribute VAR means that the named variable will be passed by reference; it's implemented by passing a pointer to the variable. Without the VAR attribute, the variable is passed by value.

[Syntax diagram: parameter list -- a parenthesized sequence of groups separated by semicolons; each group is optionally prefixed by VAR, FUNCTION, or PROCEDURE and contains comma-separated identifiers, followed (except for PROCEDURE groups) by : type identifier.]


The corresponding production rules are given next:


FParms : ( FParmList ) #FPARMS

FParmList : FParm
          | FParmList ";" FParm #FPARMLIST1

FParm : VAR IdentList ":" Identifier #VARTYPE
      | FUNCTION IdentList ":" Identifier #FPFUNC
      | PROCEDURE IdentList #FPPROC
      | IdentList ":" Identifier #FPVAL

An IdentList is a list of comma-separated identifiers:


IdentList : IdentList , Identifier #IDLIST1
          | Identifier #IDLIST2
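As an example of what these rules accept, consider this hypothetical procedure header (recall that functionals, while accepted by the grammar, are not implemented at runtime in this compiler):

PROCEDURE report(VAR total: integer;     { passed by reference }
                 count, limit: integer;  { passed by value }
                 FUNCTION f: real;       { a functional }
                 PROCEDURE note);        { a procedure parameter }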

Statement
The statement syntax diagram covers all of the following features in Pascal:
- the labelled statement and the corresponding GOTO statement
- the assignment statement
- a procedure call
- the compound statement, which is BEGIN END enclosing a list of statements
- the IF-THEN and IF-THEN-ELSE statements
- the CASE statement, similar to the C++ switch statement
- the WHILE-DO and REPEAT-UNTIL statements
- the FOR loop statement
- the WITH statement

A procedure call in Pascal has the same syntax as in C, except that a call with no parameters is made without a parenthesis pair. For example, in C, one might write
proc();

In Pascal, this would be written


proc;

The WITH statement has no counterpart in C. Using WITH, the prefix of any commonly-used compound name can be given a name. It provides a shorthand for referring to long compound names, and may also provide some runtime optimization. For example,
TYPE
  date = RECORD
    mo: (jan, feb, mar, apr, may, jun, july, aug, sept, oct, nov, dec);
    day: 1..31;
    year: integer
  END;
VAR p: ^date;
BEGIN
  WITH p^ DO BEGIN
    mo := mar;    { effectively is p^.mo := mar; }
    day := 15;    { effectively is p^.day := 15; }
    year := 1655; { effectively is p^.year := 1655; }
  END
END


The field identifiers mo, day, year inside the BEGIN END of the WITH effectively become variable names, but are in fact associated with the dereferenced pointer p^. We call an item in a WITH list a record prefix. Obviously, a WITH list cannot contain two or more prefixes referring to the same record. It's also possible that a field name conflict may arise between two different records. It's a compiler responsibility to report such problems.


[Syntax diagram: statement -- an optional unsigned-integer label and colon, followed by one of: an assignment variable := expression; a procedure call with an optional parenthesized expression list; a compound statement BEGIN statement ; ... END; IF expression THEN statement with an optional ELSE statement; CASE expression OF constant-labelled statements END; WHILE expression DO statement; REPEAT statements UNTIL expression; FOR variable identifier := expression TO or DOWNTO expression DO statement; WITH variable , ... DO statement; GOTO unsigned integer; or the empty statement.]


The corresponding production rules for the statement syntax diagrams are:
Stmt : IF Expr Then Stmt #IF_STMT
     | IF Expr Then Stmt ELSE Stmt #IFE_STMT
     | WHILE Expr DO Stmt #WHILE_STMT
     | REPEAT StmtList OptSemi UNTIL Expr #REPEAT_STMT
     | WITH WVarList DO Stmt #WITH_STMT
     | FOR AssignStmt TO Expr DO Stmt #FOR_STMT
     | FOR AssignStmt DOWNTO Expr DO Stmt #DFOR_STMT
     | AssignStmt
     | ProcCall
     | GOTO Integer #GOTO_STMT
     | Integer ":" Stmt #LBL_STMT
     | CASE Expr OF CaseClauses OptSemi Otherwise #CASE_STMT
     | Block

Then : THEN

Block : BEGIN StmtList OptSemi END #BLOCK

StmtList : StmtList ";" Stmt #STMTLIST
         | Stmt

AssignStmt : Variable ":=" Expr #ASSIGN

ProcCall : Identifier ( IOList ) #PROC_CALLN
         | Identifier #PROC_CALL0

CaseClauses : CaseClauses ";" CaseClause #CCLIST
            | CaseClause

CaseClause : ConstList ":" Stmt #CCLAUSE

Otherwise : OTHERWISE StmtList OptSemi END #OTHERWISE
          | END

OptSemi : Empty
        | ";"

The reader may notice that the OTHERWISE keyword is not in the syntax diagrams. This is provided in most Pascal implementations as a way of catching a CASE value that does not belong to any of the labels. We've also made the semicolon optional in a few places that aren't covered by the syntax diagrams, in particular, just before an END in a Block and in the Otherwise clause.
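Here is a sketch of a CASE statement with an OTHERWISE clause as these rules accept it (illustrative only, not from the text):

CASE k OF
  1, 2: writeln('small');
  3:    writeln('three')
  OTHERWISE writeln('something else')
END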

Block
A block is a high-level Pascal unit consisting of a set of declarations of constants, types and variables, followed (optionally) by procedure and function declarations, which are followed by a list of statements enclosed in a BEGIN END pair. Notice that a block can appear inside a block, as the full declaration of a procedure or function. This also implies that procedure and function declarations can be nested. A block is required as a program unit, when preceded by some syntax starting with the keyword PROGRAM.


[Syntax diagram: block -- optional LABEL, CONST, TYPE, and VAR sections, followed by any number of PROCEDURE or FUNCTION declarations (each with an optional parameter list, a : type identifier for functions, and a block or FORWARD), followed by BEGIN statement ; ... END.]

The corresponding production rules are as follows. Here the syntax diagram for a block corresponds to the nonterminal Pblock:
Pblock : Decls DeclEnd PList Block #PBLOCK

Decls : Empty
      | Decls Decl #DECLLIST

DeclEnd : Empty #DECLEND

Decl : CONST CDeclList
     | TYPE TList
     | VAR VList
     | LABEL LabList ";"

CDeclList : CDecl ";"
          | CDeclList CDecl ";" #CDECLLIST

CDecl : Identifier = Expr #CDECL

LabList : LabList , Label #LABLIST
        | Label #ONELABEL

Label : Integer

VList : VList VarItem ";" #VARLIST
      | VarItem ";"

VarItem : IdentList ":" Type #VITEM

IdentList : IdentList , Identifier #IDLIST1
          | Identifier #IDLIST2

TList : TList TypeDef ";" #TYPELIST
      | TypeDef ";"

TypeDef : Identifier = Type #TYPEID

PList : PList PFDecl #PLIST
      | Empty

PFDecl : ProcDecl ";"
       | FuncDecl ";"

ProcDecl : PROCEDURE Identifier FParmStart FParms ";" FORWARD #PFEND0
         | PROCEDURE Identifier FParmStart ";" FORWARD #PFEND1
         | PROCEDURE Identifier FParmStart FParms ";" Pblock #PBEND0
         | PROCEDURE Identifier FParmStart ";" Pblock #PBEND1

FParmStart : Empty #FPARMSTART

FuncDecl : FUNCTION Identifier FParmStart FParms ":" Identifier ";" FORWARD #FFEND0
         | FUNCTION Identifier ":" Identifier ";" FORWARD #FFEND1
         | FUNCTION Identifier FParmStart FParms ":" Identifier ";" Pblock #FFENDB0
         | FUNCTION Identifier ":" Identifier ";" Pblock #FFENDB1

The keyword FORWARD can be used instead of a declaration of code through a Pblock. This permits declaring mutually recursive functions. For example, if function A can call function B, which can call function A, and A is declared before B, then we need a FORWARD declaration for B preceding the A declaration. The B call inside of function A can then be syntactically checked and generated without having yet seen the B function's code. Of course, the FORWARD declaration and the full declaration must agree with respect to the return type (if any) and the numbers and types of the formal parameters.
The reader may also notice that in the production rules, the CONST, TYPE and VAR sections may appear in any order. The syntax diagrams only permit the specific order CONST, TYPE, VAR.
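For example, mutually recursive procedures can be declared with FORWARD like this (a sketch; following the usual J&W convention, the parameter list is not repeated when the body of the FORWARD-declared procedure is finally given):

PROCEDURE b(n: integer); FORWARD;

PROCEDURE a(n: integer);
BEGIN
  IF n > 0 THEN b(n - 1)
END;

PROCEDURE b;  { parameters were given in the FORWARD declaration }
BEGIN
  IF n > 0 THEN a(n - 1)
END;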

Program
A Pascal program describes a complete program. This is the "goal" syntax diagram of the language. The syntax of a program starts with the keyword PROGRAM, is followed by a program name, then a list of one or more names in parentheses, a block, then a terminating period. The names in parentheses refer to the standard input and output files, respectively, that will be implied in a read, write or reset statement in the code. By using the special names "input" and "output", the program will attempt a read operation from the console (standard input, or stdin, in Unix). A write output will be attempted to the console (standard output, or stdout, in Unix). The J&W Pascal Report does not define the consequences of using any other names in these positions, only that they must appear if a read or write is used in the program with no file designator.

[Syntax diagram: program -- PROGRAM identifier ( identifier , ... ) ; block .]


The corresponding production rules are as follows:
Goal : Program EOF

Program : ProgHead ";" ProgStart Pblock . #PROGRAM

ProgStart : Empty #PROGSTART

ProgHead : PROGRAM Identifier #PROGDECL0
         | PROGRAM Identifier ( IdentList ) #PROGDECL1


Here, EOF refers to the end-of-file expected to follow a program source file. Although the PROGDECL1 production suggests that any number of identifiers may appear in the parentheses (as an IdentList), in practice we expect to find only "input" or "input, output".
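For reference, a minimal program accepted by these rules might look like this (an illustrative sketch):

PROGRAM hello(input, output);
BEGIN
  writeln('hello')
END.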

Unsupported Features
A few features found in the Pascal Report are not supported, as follows:
- The PACKED attribute is ignored. The PACKED attribute is intended to improve the packing of objects within a record structure. It was introduced to help optimize space on older architectures that were not byte-oriented. For example, Digital Equipment Corporation's popular DECsystem-10 architecture supported 36-bit words, with no byte addressing. For maximum runtime speed, a full word could be devoted to each byte of a byte array, but this of course wasted 28 bits per character. Most modern CPU architectures have efficient byte-oriented memory access.
- The FILE types supported are ordinary text files. No binary files are supported. This can easily be extended if the user wishes.
- Functionals are not supported. The idea is that a function or procedure name might be assignable, and in particular, passed as a parameter to a function.


The WITH statement is not supported. The grammar supports it, but its implementation is complicated.

Grammar coverage
As to the grammar, by itself it supports many Pascal forms that are in fact illegal. That is, the grammar covers Pascal, but does not exactly conform to it. This was necessary in order to obtain a grammar that is LR(1), i.e. one that would pass through an LR generator with no conflicts. (Actually, there is one conflict, associated with the IF-THEN-ELSE statement, but it's easily repaired with a precedence rule.)
The covering properties of the grammar are related to these features, some of which have been discussed above. Function actual parameter lists are normally a list of expressions separated by commas. This was extended to include the forms used in the write and read statements for field width and precision, i.e.
expr : width : precision

which are now described by the IoItem nonterminal. This made it possible to eliminate the keywords write, writeln, read, and readln from the grammar, and to treat these special functions as any other procedure call. It's also difficult to design a non-ambiguous grammar if we distinguish these special read/write functions by their parameter usage. The semantics appropriate to the particular function or procedure call can easily be invoked later. Note that functions and procedures in Pascal may not have an arbitrary number of parameters, although these four IO functions may.
A function call with no parameters appears to be identical to a simple variable reference. That is, a function FNC, with no formal parameters, is called like this:
k := fnc;

From the point of view of the grammar, fnc appears to be a simple variable. The distinction requires a symbol table lookup, which is not done during parsing, so these cases need to be sorted out during the semantic phase.
The production rules for a procedure and function declaration appear overly complicated. They are written this way to avoid certain LR conflicts, considering that procedure and function prototypes must be supported through the FORWARD keyword, as well as actual declarations. Note that the FORWARD keyword appears at the end of the prototype, not at the beginning, which can cause LR conflicts. Also, we will need some empty productions inside these in order to invoke symbol table scope changes. Essentially, the scope level increases just after the procedure name is scanned, but before the formal parameters are scanned. It decreases at the end of the procedure declaration.
Many operations require assistance from the symbol table. For example, it's impossible to determine from the grammar alone whether each GOTO label has been previously declared, or whether a procedure or function call, or a variable usage, is valid. It's also impossible to determine from the grammar whether the first parameter of a write call is a FILE type. The validity of a pointer dereference, array index, etc., is also untestable through the grammar alone. All type checking must be done during a later semantic analysis phase, in which identifiers are classified through symbol table assistance. No symbol table information is required to distinguish identifiers during parsing.
All of the binary and unary operations (including + - * / :=, etc.) require semantic type checking later. The grammar permits any combination of operands (integers, reals, Booleans, strings, etc.) with any particular operator. However, only certain combinations are permitted in Pascal. For example, the * operator is legal between two numeric types, illegal between two Booleans, legal between two powersets and illegal between two strings. These fine distinctions require considerable compiler semantic work to sort out.

Bibliography
[1] Kathleen Jensen and Niklaus Wirth, Pascal User Manual and Report, second edition, Springer-Verlag, New York, 1978.


Appendix 5: Unix Tools


W. A. Barrett, San Jose State University napp5.doc, vs. 2

Using Unix
This is merely an introduction to the riches of the Unix shell commands and utilities. The capabilities are far richer than you will find under DOS, Windows 3.1 or Windows 95. Unix was designed for system program developers, who must deal with complicated program systems spanning many different directories and containing hundreds of files. Nothing in the MSDOS/Windows environment is comparable. To learn more, you should: 1) use the man pages built into Unix, or 2) buy one or more Unix tutorial and reference books.
The man pages are excellent if you already are familiar with the Unix tool names. It's easy to use: just type man tool, where tool is a Unix command, utility or C library function. It'll find documentation on it for you and display it in your current window.
Under Linux, try info. This is like a read-only editor framework, but the navigation commands need a bit of practice.
Under Sun Solaris, try opening the AnswerBook system, by choosing the right-most menu, then the AnswerBook icon. This is an HTML help facility that provides details on all the Unix shell commands, C library functions, the windowing environment, Solaris and the SPARC platform. (AnswerBook will not help you with the Gnu tools; use info for that.)
Regarding Unix books, there are many choices. Check the shelves of the library or a local book store. The O'Reilly & Associates Nutshell handbooks are excellent and reasonably priced.

Files and Directories


Unix files are organized hierarchically, as in DOS. There are important differences, however:
- Unix uses a forward slash (/) as a directory-file separator in path names. There is no concept of a drive (i.e. C:) in Unix.
- File and directory names are case-dependent. That is, the file Fred is different from FRED and fred.
- File and directory names can be longer than DOS names. Most Unix systems support up to 32 characters in a name.
- As in DOS, there's a current directory. This is the one that's understood by default when your path name doesn't start with / - it's effectively prefixed to any path name that doesn't start with /.
- The required DOS suffix (e.g. .EXE, .TXT, .COM, etc.) can be used in Unix, but has no special meaning. The period character is just considered part of the file name; you can use several periods in a Unix file name. You can also use digits, underbar, dash, plus and certain other characters. Which characters are legal depends on the shell interpreter.
- Unix figures out which are executable files by looking at a file's attributes - these are read, write or execute. The attributes also include who's permitted to exercise the permission - owner, group or world.


Directory / is called the root directory. All of the files accessible to you are found under the root directory. You'll notice that many of the files are owned by root, which is a Unix designator for the system administrator. In general, you can't change most of the root files or directories, some directories can't be entered, and some files can't be read.
Your home directory should be the current directory when you log in. It will usually reside on the file server, and can be accessed from any workstation. If your account is c152t15, then your home directory will probably be /export/home1/c152t15. (The choice of home1, home2, home3 is up to the system administrator, and may change in the course of the semester.) It has the symbolic name $HOME or ~.
Some of the directories under / are on the workstation's private disk. Others are accessed through the Ethernet links to a large disk on a separate workstation called the disk server. This is all arranged by the system administrator and is out of your control. You can tell something about this arrangement by running df, which describes the physical disk configuration and tells you how it's associated with the path names under root.
Some special directories of interest to you are as follows:
- /export/home1, /export/home2, /export/home3: where the student accounts are located. These are all marked for general read-access, and the programs can be executed, but are write-protected - you can't change them or delete them. They reside on the common Sun file server, and can be accessed through NFS from any Sun workstation.
- /export/home: contains faculty and graduate student accounts. Some of these are marked for read access by anyone; for example, /export/home/wbarrett/ce152 contains files used in this course.
- /usr/local/bin: contains most of the general-purpose software installed by the system administrator, including the Gnu tool binaries, and more.
- /bin: Unix utilities for general use.
- /sbin: Unix system utilities.
- /usr/bin, /usr/ucb/bin: user-contributed Unix programs. (ucb refers to University of California, Berkeley.)
- /usr/include: the include files for most of the Unix standard library functions.
- /usr/lib: general library files for the Unix and Gnu compilers. The Gnu compilers in fact draw on libraries that have an extended path under /usr/lib.

File and Directory Access


Here are some basic access means for files and directories:
- Type the path name of the current directory: Ans: pwd
- How do I change the current directory? Ans: cd newdirectory
- How do I make my home directory the current directory? Ans: cd
- What's in the current directory? Ans: ls or ls -l
- When was a file last modified? Ans: ls -l reports the last modification date and time.
- How large is the file? Ans: ls -l reports the file size in bytes. For a directory, the size is always 1024; it doesn't include the size of the files in the directory.
- How can I create a new directory? Ans: mkdir dirname. This creates a new directory in your current directory.
- How can I delete a directory? Ans: rmdir dirname. You won't be allowed to do this if the directory contains any files or subdirectories, or if you don't have write permission.


- How can I delete a file? Ans: rm filename. You won't be allowed to do this unless you have write permission. Caution: there's no undelete. Once you remove a file, it's gone forever.
- How can I delete a bunch of files? Ans: use the wild card characters * or ?. In a file name, ? stands for any one character. * stands for any sequence of characters, including none.
- How can I delete a directory and all the files under it? Ans: rm -r directoryname. (The -r means do this recursively, descending into all subdirectories.)
- What's different about Unix regarding the wildcard characters? Ans: in Unix, the shell expands wild card characters (*, ?). In DOS, the application program expands them. So they will always be expanded in Unix, while in DOS, the application program can choose to put a different meaning on them.
- How can I determine a file's type? Ans: run ls -l filename. There's a group of letters on the left, for example drwxr-xr-x. If x appears in this, the file is executable. Otherwise, it's considered an ordinary file. There's no way to determine if a file is readable as a text file or not, except by trying a text editor on it. It's best to follow Unix file conventions in naming your files.
- What are the Unix file conventions? Ans: the list continues to grow, but here are some important ones you should remember and use:
  C source files: suffix .c
  C++ source files: suffix .cpp
  C and C++ include files: suffix .h
  Object files: suffix .o
  Library files: suffix .a
  Executable files: usually no suffix, but marked x in an ls -l listing.
  Make file: no suffix, but usually named makefile or Makefile
  Other text files: no convention. I use .t or .txt. Some text files have no suffix.
- How can I direct the output of one program into another one? Ans: use the pipe symbol (vertical bar | ) between the two. Here's an example:
prog1 | prog2

prog1 is executed. It supposedly generates a character stream that normally is sent to stdout (your console screen). However, the pipe symbol causes the stream to pass into prog2, which is expected to accept it and process it as its stdin character stream.
- How do I direct the output of a program to a file? Ans: use the > operator. Here's an example:
prog1 > myfile

Program prog1 is executed, and normally generates a character stream sent to its stdout, which is your console screen. The > operator sends it to the file myfile instead. The contents of a file can be directed to the input of a program like this:
prog1 < myfile

The following won't work. The shell will try to execute the first name in each line as a program, and that's not what you want here. Also the > operator means "send to the following file":
myfile > prog1 # this won't work as expected

File Ownership
Files and directories have owners and can be protected. You will generally be the owner of the files under your home directory, for example, all the files in /export/home1/c152t15. You can then mark all of your files read-write permission for yourself, but no access for others. If you have read access to a file, you can bring it up in an editor and read it, or access it through cat or vi or some other Unix tool. If you have write access to a file, you can change its contents with an editor, compiler, linker or whatever. If you have execute access, the file is usually a shell script or a binary program file, and you can execute it.
You can't change the ownership of your files and directories; only the system administrator can do that. Any file or directory that you create will carry the ownership of your home directory. You can change their protection rights, using chmod. Here's how chmod works:
- Look at the protection codes you get from ls -l, i.e. drwxr-xr-x. Ignore the d.
- Consider the 9 remaining tags as bits of a binary number. For example, rwxr-xr-x is like 111101101 - a dash is a 0 and a letter is a 1.
- Then write that number in octal, i.e. 755. (7 = 111, 5 = 101, 5 = 101)
- Call chmod with the desired number, for example chmod 600 filename to change the permissions of the file to
rw-------

(That will make the file inaccessible to anyone but yourself.) chmod can be called with a wild card like * to give all the files in the current directory that permission. You can only use chmod on a file that you own.

Using Shell Scripts


You can embed a sequence of commands in a text file using any text editor (vi, emacs, etc.), then execute your script as though it were a program. There's a lot to be said about shell scripts; you should read a Unix book to get all the details. But here are some tips:
- Use the # sign to open a comment. It can be in the leftmost column of a line, or anywhere following a command.
- The first line of your shell script should tell Unix which program to call to interpret it. You can have your script interpreted by any program at all, including one that you've written. For the Bourne shell, or sh, use this line:
#!/bin/sh

For the C shell, or csh, use this line:


#!/bin/csh

For the Korn shell, or ksh, use this line:


#!/bin/ksh

- Remember that this has to be the first line in your script.
- These shells have different syntax forms for shell control structures, such as if-then, while, for, case, etc. These control structures resemble the ones found in C, but only superficially.
- Unix tokens (and the shell operators) must be separated by spaces or tabs. Running them together will cause the shell interpreter to fail. For example, this is incorrect:
if[-f myfile] ; then ...

You need to write it with spaces separating its parts:


if [ -f myfile ] ; then ...

- Think of a shell command as a Unix program followed by parameters. The parameters must be separated by spaces. So if is in fact a shell program, and it has the parameters [, -f, myfile, ].
- Line feeds are important! A line feed indicates the end of one command and the beginning of the next. The semicolon (;) in Bourne and C shell can usually be used instead of a line feed.


- What looks like a C control statement is considered several separate components in shell programming. But the shell keeps track of what's there for the sake of nesting. So, if is considered one command, then is another command, else yet another one, and fi yet another. Note that an if requires a matching fi later on.
- If a parameter has to contain a space or tab, you need to quote it with " or ' quote marks. Either one can contain line feeds, so you can (for example) quote a whole section of text, including line feeds. Of course, these have to match up properly.
- You can debug a script (after a fashion) by running it as a parameter through the appropriate shell. For example, if your script is called myscript, then you can see the running details this way:
sh -x myscript

The -x generates a trace from which you can figure out what's going on.
- Use the echo command to print stuff to stdout. echo expects one or more parameters, and prints them. You can quote a string and echo it out. Shell parameters can be embedded in double quotes (") and will be expanded. If embedded in single quotes ('), they will not be expanded.
- The man pages for sh, csh, ksh are voluminous. Try the AnswerBook instead; it's better formatted and you can move around easier.
- Use shell variables to carry strings. And note that every shell variable is carried as a string, including numbers. If you need to do shell arithmetic, look at the special shell function expr. It performs arithmetic evaluation on shell variables and constants, after a fashion. If you need to test variables or expressions, read the man or AnswerBook page for test.
- In general, any shell function or program is expected to return an arithmetic value. (This is the same value returned from an int main function in C when it exits.) You can do a test-and-branch against this with if. You can view the return value with the parameter $? in Bourne shell. (csh and ksh have different ways of capturing the return value.)
- Consider writing a makefile instead. We'll explain how make works later on. The difference between a shell and make is that the shell operates like a program, while make operates from a set of expert-system style rules. make is especially good at organizing a batch of compile, link and execute operations on files.

Finding a String Occurrence in a File: grep


The Unix tool grep is probably the best friend a programmer could have. Despite its peculiar name, it finds strings in one or more text files. (Don't use it on a binary file.)
In modifying or developing software within a complicated system of source files scattered around among several directories, you will quickly discover that you have a major problem:
- Where is the definition of some class or function? In other words, what does this function do, and what do its parameters mean?
- Where are all the calls of a particular function? In other words, what source files contain a call to some function?
If you are fortunate enough that your entire environment was developed under, say, Microsoft Visual C++, then there are finder tools built in that can answer questions like these. Most older Unix and other code wasn't written that way. It's in the form of text files probably scattered around several different directories.
grep has a lot of options, as you'll discover if you look at its man page:
man grep

but most of the time, this will do the trick:


grep name files

Here, name is some string you are searching for. If it contains blanks or tabs, you need to double-quote it. name is considered to be a regular expression, not just a simple string. In particular, the characters *, +, (, ), [, ], . have designated meanings as operators, and do not stand for themselves. If you don't want the regular expression feature, use fgrep instead. It considers name as just a string, and treats these special characters as ordinary ASCII characters. The files can be a list of files, perhaps generated through a wild card, like this:
grep myFunction *.h *.cpp

which locates all occurrences of "myFunction" in the .h and .cpp files in the current directory. When more than one file is listed, you'll get the file name on each line, followed by the text line containing "myFunction" that grep found. The strings found may be part of a larger string. So in addition to all the names "myFunction", you might get "myFunctionName" or "ThisismyFunction" reported. A useful option to fgrep is -i. This causes fgrep to ignore the case of any letters during the search. Thus the names MY, my, My and mY would all match the pattern "my". In working with the Qparser tools, the files you will be looking for will generally be in your current directory or in directory ../lib. So you might want to call fgrep like this:
fgrep name ./* ../lib/*

which searches all the files in these directories for your pattern. The use of * here is considered a shell wild card, rather than anything special to fgrep. The shell first expands these patterns into a list of file names, and these are then operated on by fgrep.

Finding a File with a Particular Name


A less common problem is finding where a file with a particular name is located. The find utility does that. (find under MSDOS doesn't do this; it essentially behaves like fgrep instead.) find has lots of options as well. Read the man page for more information. Here's a basic formula:
find /usr -name "*.cpp"

This starts in directory /usr and searches through it and all its subdirectories, looking for a match to the pattern specified by -name, i.e. all files that end in ".cpp". The *.cpp is placed in quotes here because it's first picked up by the shell executing this command, then passed on to another command that does the pattern matching. The initial shell strips off the quote marks, but doesn't expand the *.cpp into a list of files; the find operation effectively does that. find will list all the directories and files it found with the specified suffix. This may be a long list, so you might want to pipe the output to fgrep or some other utility to further select what you are looking for.

Editors
The vi editor is opened by typing its name in a shell window. vi takes over the shell window; the shell returns when you exit from the editor. While in vi, you are editing a copy of your file in RAM memory, not the file itself. The file is written just before you exit, or when you execute a "write" command.
vi was one of the first screen-oriented Unix editors, and was based on the primitive line-oriented editor ed. It's considered by most users as unfriendly and difficult to learn. However, it can be found in identical form on all Unix systems, and if you intend to become a professional software engineer, you should learn enough about vi to be able to use it when you must.
vi does not respond to mouse movements, except those that control its shell window. On some workstations, the arrow keys, pageup, pagedwn and other special keys may operate as expected. Test this before depending on them.

vi operates in two modes, a command mode and an insert mode. While in insert mode, every character you type is inserted in the text. Line endings are not automatic - you need to press the Enter key to start a fresh line. You can make simple backup corrections on the same line with control-H, the back arrow key or the Delete key - which one applies depends on your terminal emulator. You enter command mode by pressing the escape key. In command mode, the ordinary character keys are interpreted as commands. There are a large number of commands and command variations in this mode. A crib sheet is attached that explains most of the commands. Start with these, then work up to the more sophisticated commands:
- Cursor movement: letter h moves left. Letter l moves right. Letter j moves down and letter k moves up.
- Scroll up and down: control-F scrolls forward in your file. control-B scrolls backward.
- Moving to a particular line: type : (colon), then the line number, then Enter.
- Finding the current line number: type := Enter (colon-equal-Enter). The line number of the cursor will be reported on the bottom of the screen.
- Checking for balanced parentheses, braces or brackets: position the cursor on one of these, then type % in command mode. The cursor will move to the matching one. You'll get a beep if it can't find a match. This is very useful when constructing C programs, since the compiler will generate very confusing error messages if the braces aren't properly matched up. Caution: vi will be confused by parentheses, braces or brackets in quoted strings, and will get the matching wrong.
- Delete: letter x deletes one character. Letter D deletes from the cursor position to the end of the line. The letter-pair dd deletes the current line.
- Insertion: letter i goes into insertion mode, where the insertion occurs just to the left of the cursor. After that, whatever you type is inserted in the current line. Letter a also goes into insertion mode, but just to the right of the cursor.
- Seeing the line numbers: type :set number (notice the colon). You should see line numbers appear on the left side of the screen. You can get rid of them by typing :set nonumber.
- Copying an edit portion to a temporary file: you can only copy full lines. Determine the line numbers in the range, for example 5 through 27. Then type :5,27w filename. Notice the colon, the line number range (5,27) and the w (write) command. You can use your local directory for the filename, or the /tmp directory.
- Copying a file into your edit session: again, only complete lines can be copied. Position the cursor at the leftmost column of some line; the text will be inserted just above this line. Then type :r filename. Here, filename must be a text file, possibly saved by a previous w command.
- Deleting an edit section: you can only delete full lines. Determine the line numbers in the range, for example 5 through 27, then type :5,27d.
- Refreshing your screen: sometimes your terminal emulator and vi will get out of register. This will happen if you didn't correctly specify the terminal type when you logged in, or sometimes because the terminal emulator has some bugs. Type control-L.
- Multiple operations: many of the letter commands can be repeated by typing a number first. For example, to delete 6 characters, type 6x in command mode.
- Saving your edit file: type :w. If vi doesn't know the file name, it'll tell you about that. You can type :w filename to save your edit session to some other file.
- Quitting vi: type :q.
If you haven't first saved your session through :w, you'll be warned and not allowed to quit. You can exit without saving your session by typing :q!.
- Escaping vi temporarily to execute a Unix command: type :! This will change your screen to a Unix shell, from which you can execute commands. To return to your vi session, execute the command exit. You need to remember that you are in a vi session; otherwise exit will escape from Unix, logging you off.

BEWARE: if you edit the same file with vi in two different shells, you are editing two different copies of the same file, not the same file. If you save one, then the other, only the changes you have made to the second session are saved; the changes made in the first one are lost. Redhat Linux will warn you about trying a second editing session with vi on the same file.

The ed and ex Editor


Learning something about ed (ex) is also a good idea, since it can be used to modify text files in scripts, and its control language has found its way into many Unix tools, such as diff, sed, sccs, and more. ed commands are also invoked when you type the command character : (colon) in vi. ex is also useful in a script or make file when you need to change something in a text file. Read the man pages for ed or ex.

The emacs Editor


emacs creates a new window. It has extensive help facilities, and a large number of features. emacs is a product of the Free Software Foundation, and is distributed in what amounts to a free public-domain license. (There is nevertheless an extensive license agreement shipped with the Gnu products, which you should read before copying any of the Gnu software.) emacs has been ported to a large number of workstations and is also becoming a software standard. It has a very large number of features, and can be configured to your private style, even made to imitate vi, if that's your preference.
emacs has a single mode -- insertion. Just type, and the text will appear in the window. The control functions are obtained through the use of the control or Escape key in combination with other keys. When emacs is used through telnet or a modem, the mouse and arrow keys are useless. When it's used on the workstation, you can use the mouse to select menu items and position the cursor. Some commands are infrequently used, or are potentially dangerous, and require typing control-X followed by a command phrase. Most of the useful ones are simple control or Escape keys, or combinations of control-X and Escape.
The easiest way to learn emacs is to go through the online tutorial, which you can enter by first starting emacs, then typing control-h t (hold down the Ctrl key, type h, release Ctrl, type t). This will direct you through some exercises in which you can learn the basic edit operations. If you need to escape from emacs, type control-X C.

Gnu Compilers
Our Gnu compilers are from the Free Software Foundation, Cambridge, MA. They are available on a wide variety of platforms, and are essentially free, provided that we accept the whole system and dont modify or redistribute the software. Our version supports C and C++ in an integrated compiler, which can be called through
gcc   or   g++

If your source code has the suffix .c, then the compiler expects it to conform to the ANSI C standards. If it has the suffix .cpp or .cc, then the compiler expects it to conform to C++ standards. More details can be found by starting emacs, then reading the info pages through control-H i. You'll find some hypertext leading to a description of the compiler calling options and other features. There are a very large number of options. The following sections contain a review of the most
commonly used options.

Using the File Suffixes


The Gnu compilers can accept a mixed list of source and object files. The compiler will do its best to compile the source files, then link them with the object files to produce an executable. By default, the executable file will be a.out. To specify a different file name, use option -o followed by the desired name, like this:
gcc f1.c f2.c f3.o f4.o -o f1

which will compile f1.c and f2.c, yielding object files f1.o and f2.o. It will then link these together with f3.o and f4.o to produce the executable file f1. (This assumes that no unusual library functions are required).
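The same pattern applies to C++ sources through g++; a hedged sketch with made-up file names:

g++ main.cpp scan.cc old.o -o app

Here main.cpp and scan.cc are compiled as C++, then linked with old.o to produce the executable app.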

Suppressing the Executable


The default action of gcc is to produce an executable. Sometimes you only want an object file. Option -c stops the compiler at the object level, so that it won't try to link its object files. For example,
gcc f1.c f2.c f3.o f4.o -c

will cause f1.c and f2.c to be compiled into object files f1.o, f2.o. The other object files will be unchanged and ignored.

Specifying Include Directories


Any include file specified like this in a C or C++ source program
#include <stdio.h>

should be located in directory /usr/include. Any include file specified like this
#include "my.h"

by default should be in the current directory. Sometimes, you'd like the compiler to search other directories for an include file quoted like this. To do that, use the compiler option -I followed by an alternative include directory name (with no space), for example,
gcc -I../mylibdir f1.c f2.c f3.o f4.o -o f1

You can use several of these to make the compiler search various directories for its include files.
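For instance (the directory names are made up):

gcc -I../mylibdir -I../include f1.c f2.c -o f1

The -I directories are searched in the order given, before the standard system include directories.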

Library Files
If the compiler's link step complains about not finding certain modules, you may have to specify a library file, or a directory in which to search for a library file. A library file contains a collection of object files, organized so that only the ones you need will be pulled out and installed in your executable program. A library file will always have the form
libXXX.a

where XXX is some name. For example: libm.a, which is the standard C/Unix math library. You would expect your library files to be in directory /usr/lib. In fact, you'll find a great many .a files there. However, these are usually specific to the C/C++ compiler supplied by the system manufacturer. The Gnu library files are quite different: they are located under a rather long path starting with /usr/lib. But the Gnu compiler knows where to find its library files, so you can refer to them as described next. If you must specify a library file, use the option -lXXX with gcc or g++, like this:
gcc -I../mylibdir f1.c f2.c f3.o f4.o -o f1 -lm

which tells the linker to look for the library file libm.a. Notice that the lib and .a parts are not

mentioned in the option, only the remaining name part m.

Library Directories
The linker can be told to look in a special library directory with the option -L, which must be followed by the directory name (no space). This might be followed by a -l option, to specify a particular library file. For example:
gcc -I../mylibdir f1.c f2.c f3.o f4.o -o f1 -L. -L/usr/lib -lm

This tells the linker to look for library files in the current directory (the .), and then in directory /usr/lib.

Making your Own Library


A library (.a file) is built through the Unix utility ar ("archive"). This has a variety of options, including some that show you what's in a particular library, and ways of extracting particular object files from a library. You can build your own library file like this:
ar ru libqp.a obj1.o obj2.o obj3.o

which constructs the library file libqp.a from the three object files listed. The ru tells it to replace and update the existing library with the object files: it'll replace existing members with the new ones, so you can use this over and over to update the library.
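ar can also inspect or unpack a library; a quick sketch using the library built above:

ar t libqp.a            (list the object files in the library)
ar x libqp.a obj2.o     (extract one object file into the current directory)

On some older Unix systems you may also need to run ranlib libqp.a after updating, so that the linker can find the library's symbol index.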

Using gdb
You may have to use the Gnu debugger gdb or xgdb to track down bugs in your program. If so, make sure the option -g is in the gcc or g++ command line. The Gnu debugger comes in two flavors, gdb and xgdb. gdb is a line-oriented debugger. xgdb is a GUI debugger, with menus and special windows. In working with code that accepts text from input files, processes it, then generates output text to stdout or to a file, either one can be used; you may find gdb somewhat more obscure, but faster. You can also use gdb through a remote telnet session, whereas xgdb expects you to have a fast X windows internet connection.

Starting gdb; Setting Parameter Arguments


Your files should be compiled with the -g option. This writes a symbol table and source file information into the executable file so that gdb can report variables by name, and set breakpoints by source line numbers. Suppose your executable file is exfile, and it would ordinarily be called like this:
exfile p1 < p2 > p3

To execute this program under the debugger, using these parameters, do the following:
gdb exfile          (gdb starts and writes a few lines to the screen)
(gdb) set args p1 < p2 > p3
(gdb) r

The Gnu debugger will execute your program just as it would be executed from the console. If your program crashes on a segmentation violation or some other problem, the debugger will usually intercept the trap and permit you to find out just where and how the crash occurred. If you want to launch gdb and stop on the first instruction of main, do the following:
gdb exfile          (gdb starts and writes a few lines to the screen)
(gdb) set args p1 < p2 > p3
(gdb) b main
(gdb) r

It should stop on the first line of main. The "b" command plants a breakpoint on the specified function name.

Using gdb Commands


The "current" location in your source files will be listed with command "l" (lower-case L). gdb will print 10 lines of source. The line about to be executed will be the 6th line. Here's a listing obtained upon entering a program with a breakpoint at main:
(gdb) l
57          }
58
59          int
60          main(int argc, const char **argv)
61          {
62            progname= argv[0];
63
64            if (argc != 2) giveHelp("expecting a file name");
65            const char *fname= argv[1];
66
(gdb)

The current location (about to be executed) is line 62. You can trace all the function calls currently active through information left in the runtime stack with command "bt" (backtrace). The current function is tagged #0, the function that called it is tagged #1, etc. down through the stack. This won't work if your program failed by running out of stack space, i.e. through an infinite sequence of recursive calls. For example:
(gdb) bt
#0  main (argc=2, argv=0x2ff22cd8) at lextest.cpp:72
#1  0x100001a0 in __start ()
(gdb)

You can start or restart the program by setting a breakpoint in "main", then "run", like this:
(gdb) b main
(gdb) r

gdb will stop on the first line of your "main" function. The entry point of other functions can be set the same way, by using the function name. A member function of an object will usually require that you also name the class, like this:
(gdb) b Cobject::doit

You can also set a breakpoint by line number. Use the "l" (list) command, or a line-listing editor, to find the line number you want. For example, use vi on the source file in a separate window, then :set number to show all the line numbers. Set a breakpoint by line number like this:
(gdb) b 176

assuming that you want a breakpoint at line 176. If the breakpoint you want is not in the current file, try this instead, where the file name is fname.cpp:
(gdb) b fname.cpp:344
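Breakpoints can also be made conditional, listed, and removed; a brief sketch (the condition is hypothetical):

(gdb) b fname.cpp:344 if count > 10     (stop only when the condition holds)
(gdb) info breakpoints                  (list all breakpoints, with numbers)
(gdb) delete 2                          (remove breakpoint number 2)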

To continue from a break, type "c" (continue). This will resume execution until a breakpoint is
encountered, or until the program terminates or crashes:


(gdb) c

To step through the program, line-by-line, stepping into each function call, use "s" (step), like this:
(gdb) s

To step through the program, line-by-line, executing through each function call, use "n" (next), like this:
(gdb) n

To print the current value of some variable, use "p" (print) followed by the variable name. Variables named in a C++ class may require an object qualifier, i.e. object.name or object->name. In general, gdb understands most of the syntax of simple C++ variables, with indexing, record fields, etc. Use this-> if it doesn't seem to find a variable while in a member function. For example,
(gdb) l
57          }
58
59          int
60          main(int argc, const char **argv)
61          {
62            progname= argv[0];
63
64            if (argc != 2) giveHelp("expecting a file name");
65            const char *fname= argv[1];
66
(gdb) p argc
$1 = 2
(gdb) p argv[0]
$2 = 0x2ff22d38 "/home/wbarrett/mine/lextest/lextest"
(gdb)

Here, we're at the beginning of a program lextest in which one parameter is passed (argc = 2). The argv parameter is an array of char arrays containing the full name of this program and its parameters. (File input and output redirection don't count as parameters in Unix.) So argv[0] is a C string containing the name of this program. Its address is also given in hexadecimal, 0x2ff22d38.

gdb supports a large number of commands and command options. Type "help" to get a top view of the help categories, then zero in on a category that you are interested in to find more help.
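The p command also accepts format letters after a slash, and the companion x command examines raw memory; a brief sketch (the variable names are hypothetical):

(gdb) p/x flags       (print flags in hexadecimal)
(gdb) p/d status      (print status in decimal)
(gdb) x/8xb &buf      (examine 8 bytes of memory at &buf, in hex)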

Tracing from a core File


You can use gdb to find the location of a failure after a crash. A core file should be created when your program crashes, so start gdb like this:
gdb exfile core

and it should take you to the location of the crash. You can view variables, trace the stack, etc., just as if a trap had occurred at that point. You can also try duplicating the crash by running under gdb. It should stop at any trap point, so that you can inspect the program environment for problems.

A common problem in C++ is referencing an object through a null pointer, which should generate a segmentation violation. For example, it's possible to call a member function through a pointer that happens to be NULL. The call itself may succeed, but this will be NULL, and the function will crash on accessing any of its member variables. Check
p this

to see if "this" (the current object's address) makes sense. It shouldn't be 0. To exit from gdb, type "q" (quit).


gdb and Assembler


gdb can be used with the Gnu assembler as to follow the instructions of an assembly program. This can be a little more tricky. In the first place, there's no single Gnu as assembler, rather one that's configured for the particular platform you are using. After all, the instruction set and memory references for a Sparc platform are quite different from those of an Intel platform. Yet a common instruction format is used for all of the supported Gnu platforms. We'll discuss the 80x86 platform under Linux in what follows. (These comments may not apply to other platforms.)

Your assembly file should be assembled with as using the special option --gstabs (yes, that's two dashes followed by gstabs). This will get your source assembly file built into the environment for the benefit of gdb. The as assembler expects the so-called AT&T notation, not the Masm notation. This is described briefly toward the end of chapter 2. For more details, run info as from a Linux prompt. You would do best to follow the example makefiles and other material found in directory pasprogs to get started; figuring out just how to configure an assembly file from the info directions is difficult.

When using gdb on assembly code, it knows that it's dealing with assembler rather than C. That's half the battle. Most of the commands work as in C, for setting breakpoints, printing variables, etc. However, assembler is far more primitive than C with regard to variable types, so gdb will not know how to print a variable, for example, except as a hexadecimal number. You need to spell out the format you require through options to the p command. There's no separate panel for viewing registers. You need to run one of these commands each time you want to check register values:
info registers

or
info all-registers

The short form shows all the CPU registers and their current contents, in hex format. The long form shows all the CPU and FPU registers. The long form display is rather verbose, and really only works well with an extended Linux shell window. There are ways of defining macro commands, and also of running a command on each breakpoint trap; see the built-in help facility for details. By using suitable macros, the pain of debugging with this system can be considerably reduced.
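For example, gdb lets you attach a command list that runs automatically at every stop; a brief sketch using the define facility:

(gdb) define hook-stop
> info registers
> end

After this definition, info registers runs each time gdb stops at a breakpoint or after a step, which largely substitutes for a register panel.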

Make Files
Organizing a software project, which usually calls for compiling and linking your C/C++ files, can become very complicated. There are often a lot of parameters, and you need to refer to a variety of different include and library directories. You could write a shell script to carry these operations out, except that a shell script isn't so good at deciding what needs to be done. Under Unix, most project organization is done through a make file. Just type make and a lot of work will be done for you - provided, of course, that the controlling makefile is properly written. Makefiles are provided in the Qparser directories. We suggest reading one of them with a text editor while you read the following.

Macros and Rules


A make file is a list of macros and rules. Macros are described later. Each rule has three parts to it: a target, a list of dependencies, and a list of commands. The target is usually a single file that the rule is expected to create or update. The dependencies are a list of files that the target depends on. What make does is look at the last modification time of each of the dependency files. If any one of the dependency times is later than the target's modification time, then the commands will be executed. (The commands will also be run if the target doesn't exist.) The commands are just like Unix shell commands, except that each one must be preceded by a tab character. One make rule looks like this:
target : dependency dependency dependency
tab command
tab command
blank line

That is, the target file name is followed by a colon, which is followed (on the same line) by a list of the dependency file names. You can use the backslash character (\) as a line-continuation character. The command lines must follow this target:dependency line, each preceded with a tab character. You won't be able to see the tab character in an editor, since it just looks like some spaces, but it must be there (not the equivalent number of space characters). Here's an example of a make file with two rules:
thing : thing.o
tab gcc -o thing thing.o -lm

thing.o : thing.c
tab gcc -g -c thing.c

Given a newly edited file thing.c, the second rule invokes gcc to compile thing.c and generate a fresh thing.o. (The -c option does this. Don't use -o thing.o; it won't work.) The logic followed here goes like this: the last modification time of thing.c is more recent than the last modification time of thing.o, therefore this rule is triggered. Also, if thing.c doesn't exist, then make will look for a rule that claims to know how to build a thing.c, i.e. a rule with thing.c on the left side.

When a rule is triggered, the rule's command lines are executed sequentially in a Bourne shell. After execution of the command lines, all the rules are reexamined to look for something else to do. Usually the command-line operations will update the last modification time of some other file, causing one or more other rules to trigger. In this case, thing.o will have a modification time equal to or later than that of thing.c; this also prevents this rule from being executed over and over. The first rule will be triggered next, causing gcc to link thing.o with a standard library to yield the executable thing. Note that gcc can be used as a linker as well as a compiler, depending on its command line. make in fact contains built-in compilation and linking rules, so that usually you only need to list your C/C++ files and a target file.
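If your make is GNU make, a single pattern rule can replace one compilation rule per source file; a minimal sketch, written with the same tab convention as above:

%.o : %.c
tab gcc -g -c $<

Here $< stands for the dependency (the .c file) that triggered the rule, so one rule serves every object file in the project.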

The make File


A make file essentially contains a set of rules. They can be written in any order, except that:

The first rule in the file is considered the default rule to build when you call make with no target on the command line. You should have this one build whatever you want by default.

make macro definitions (see the next section) must precede any usage. Macro definitions are
usually listed first in the make file, then followed by the rules.

make figures out how to execute the rules. If file A depends on B, which in turn depends on C, and C is newer than B, then file B is built, using the B:C dependency. Since B must now be newer than A, file A is built, using the A:B dependency.

The default name of a make file is makefile or Makefile. If your make file is in one of these, then just typing
make

will start interpretation of the rules in makefile. Your makefile can contain rules that aren't linked to anything else. You can then invoke a particular rule by naming its target in the make call, like this:
make mytarget

This will search for a rule in which the target is mytarget, then attempt to build it.

make Macros
A make macro is a string assigned to a name. A macro looks like this in a makefile:
CC= g++

and is invoked as $(CC). Notice the equal sign (=), which stamps this as a macro, rather than a rule. The whole definition must be on a single line, but \ can be used at the end of a line to continue the definition on the next line. Given this definition, each appearance of $(CC) in the rest of the file will be expanded to g++. There are advanced features that permit a macro name to be modified when it's invoked, by changing or removing its suffix, or changing the name in other ways; see the man pages for details.

You can leave a macro definition out of a make file and introduce it another way. make will then look for one of these, in this order:

A make command-line parameter of the form CC=g++. A parameter like this will also override an internally defined macro definition with the same name. That is, you can call make like this:
make CC=g++

Then any macro definition for CC in the makefile will be replaced by this command-line definition.

A shell variable with the same name as the macro. You can define a shell variable by writing CC=gcc in a Bourne shell (or setenv CC gcc in a C shell). You can review the list of shell variables by typing printenv or env.

If your program is full of syntax errors, the complaints will stream down through the shell window. You can capture these in a file for later study by redirecting make like this:
make > filename

You can also force make to stop when a screen is full by running it like this:
make | more

more is a Unix utility that eats text and sends it to the screen, but stops when 24 lines have been displayed. You continue with the space bar.
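Putting macros and rules together, here is a minimal sketch of a complete makefile for a hypothetical two-file C++ project (again writing tab for the required tab character):

CC= g++
CFLAGS= -g -Wall
OBJS= main.o scan.o

prog : $(OBJS)
tab $(CC) -o prog $(OBJS) -lm

main.o : main.cpp
tab $(CC) $(CFLAGS) -c main.cpp

scan.o : scan.cpp
tab $(CC) $(CFLAGS) -c scan.cpp

Typing make builds prog by default; typing make CC=gcc substitutes a different compiler without editing the file.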

Vi Reference Guide
The next few pages provide a reference guide to the vi editor, reprinted from Sun support documents. The same commands should work on any Unix workstation.


Installing and Using the Qparser Tools


The Unix version of Qparser is available as a tar file containing all directories, source files, makefiles, etc. that you'll require. We've tested this system under HPUX, AIX, SunOS and Linux. You need the Gnu compiler tools (gcc, g++), make, tar, gzip, and gunzip installed. Don't expect the native C++ compiler and make utilities to build these tools.

1. The system is in the tar file qp.tar.gz, about 2.5 Mbytes. You can disregard all other files; just transfer this to your Unix system, as follows.

2. Create a fresh directory, say $HOME/qparser.

3. cd $HOME/qparser

4. Install qp.tar.gz in the current directory. If you use ftp to transfer qp.tar.gz, make sure you've chosen binary mode.

5. gunzip qp.tar.gz

6. tar xvf qp.tar

7. Add the following PATH to your .cshrc file (or whatever shell/profile startup file supports your PATH environment variable):
$HOME/qparser/execs

8. Your startup script should resemble the following. Under Linux, this will be in .bash_profile:
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:$HOME/qparser/execs:.
BASH_ENV=$HOME/.bashrc
USERNAME=""
qphelp=$HOME/qparser/qphelp.hlp
cc=gcc
CC=g++
UNIX=1
TERM=vt100
TDIR=$HOME/qparser/execs
export USERNAME BASH_ENV PATH cc CC UNIX TERM TDIR

9. Log out and log in again to make these changes permanent. Then do this ...


echo $PATH

...to make sure that execs is now in your path.

10. cd ~/qparser

11. make clean; make
This should build cleanly with no errors, but with verbose output. Watch for compilation or link errors.

12. You may see errors on some Unix systems of the form xxx not found. What's happening here is that make first builds the Qparser tools nlr1, lr1p, opt, and lextbl. These are linked into the directory qparser/execs, which should be on your path. Some Unix systems fail to notice new files installed in a path directory and will therefore complain about their not being found. The utility rehash is called during the make process to fix this, but this may not always work. We suggest stopping the build, then fixing the file access problem in some way--you should be able to call nlr1 from any directory, for example, after it's built. Then restart make.

13. A previous installation may corrupt your new one. Make sure that nlr1, lr1p, opt and lextbl don't exist in your paths before building this new version.

14. You may want to copy or link these executables to /usr/local/bin or some other common executable directory for the use of others. This is not done during the build, as it usually requires root permission.

You should find several new executables in ~/qparser/execs, and tools such as nlr1, opt, lr1p accessible from any other directory. "make clean" gets rid of object files and other garbage, preparing for a fresh "make". You should be able to call it in any of the build directories, or the main directory.

Specific instructions for building a lexical analyzer, or a parser/translator from syntax diagrams or production rules, are given in other chapters, i.e. chapter 4 for lexgen, chapter 9 for syngraph, and chapters 10, 11 and 12 for LR parsing. In general, you need to observe the following guidelines:

Start a new parser or translator by copying one of the directories calc or synskels to a private directory, say myparser. This should be under directory qparser. If that's not convenient, also copy directory lib to a directory parallel to myparser. (That is, lib and myparser must have the same parent directory.) A sample grammar file can be found in calc to get you started. Most of your editing will be in the grammar file, since it can carry any C++ semantics you want to associate with production rules. A lexical analyzer will also be built automatically, based on the tokens found in the grammar file. In this respect, Qparser is more friendly than lex and yacc.

The beginner should work through this book, starting from the beginning. The best translators are generated with the LR parsing system, and chapters 10 and 11 provide a good introduction to them, with several worked-out examples. You can skip chapters 7-9 if you don't intend to use recursive descent as your base parsing system.

For an advanced optimizing compiler, the reader will want to study the remaining chapters, 13-15. These provide guidelines on dealing with expression optimization, functions, and control structures. The most completely worked-out compiler (Pascal to 80386 assembler) can be found in directory pascal5. The symbolic assembler generated by the pascal5 compiler can be assembled, linked and executed through a Microsoft Visual Studio or Linux/80386 shell, given an appropriate assembler. pascal5/pascal is designed for an 80386 or higher Intel platform. With no options, this compiler generates Masm-style assembler; you will need a Microsoft-compatible assembler, which is not part of the Visual Studio package. With option -G, it generates AT&T-style assembler, and can be used with the Linux/80386 as assembler.


Appendix 6: Microsoft Tools


W. A. Barrett, San Jose State University napp6.doc

Survey
These are the Microsoft programming tools of interest to the student compiler system:

Visual Studio C++, vs. 5 or 6. This provides a complete project, editing, compiling and linking environment, as well as menu-directed tools for constructing graphical user interfaces and debugging code.

The Microsoft MASM macro assembler, compatible with your Visual Studio environment. This is not provided as part of the Visual Studio package, but can be obtained through the publisher by special request. Similar products are offered by Borland International and Symantec.

Visual Studio is not provided as part of the software on the CDROM accompanying this book, as it is a licensed product of Microsoft. Educational users may negotiate a reduced price for this product. Although we haven't tested the most recent version (vs. 6 at the time of writing), we see no reason that it could not be used for the purposes of our compiler tools and software.

In what follows, we'll assume that these products (or their equivalents) are available on your system. If you want to push through a Pascal compilation to assembler, then assemble, link and execute the program, you should spend some time configuring your system by following the directions in this chapter. These apply to an Intel-platform PC system running Windows 98 or 2000. We have not tested Windows ME or Windows XP.

Visual Studio vs. 5 and 6


Visual Studio was designed to facilitate the development of Windows menu-directed programs. It provides a large number of tools intended to improve Windows software development productivity, as follows:

An editing environment that can be configured to one of several popular editing styles. It's smart in the sense that it understands enough C/C++ syntactic conventions to help the user in balancing comments and braces.

Menu-directed graphics tools for designing a wide variety of GUI interfaces.

Tools for connecting a GUI interface to functions and variables in the associated C++ code.

C++ source code generators that will initially generate a correct and executable program. The software developer merely augments it with his/her own features.

A full C++ compiler, ANSI library and MFC library.

Tools for designing icons and other graphical objects.

A debugging environment integrated with the editor and compiler. The debugger can be asked to display the symbolic assembler generated by the compiler. The processor registers (CPU and FPU) can be displayed. Selected memory segments can be displayed. The debugger is compatible with the separate assembler tool, ml.exe, vs. 6.11.

A resource manager and compiler. A resource is a textual representation of common objects used in an application, e.g. strings, icons, dialogs, menus, and the like.

A project orientation. A project corresponds roughly to an intelligent make system. It determines what needs to be compiled or linked after editing changes in order to construct a working version.

Version 5 (also known as Visual Studio 97) supports only protected mode.

The Qparser Tools under Visual Studio


The complete Qparser tool set can be compiled, linked and executed under Visual Studio 5.0, and (we believe) under 6.0. You can generate your own parser or translator through Visual Studio, following the guidelines given here and in other chapters. Given the assembler ml.exe, you can also assemble and debug generated assembler programs under the VCC environment. We first describe how to build the Qparser system, starting with its libraries, then its generator utilities, and finally certain of the example compiler directories.

Generating it All
Launch Visual Studio. Choose File/Open Workspace, then look for the workspace file qparser\makeall\makeall.dsw. You'll recognize it through its special icon if you use the file finder. When this is opened, you should see a list of other projects displayed. To build them all, choose Build/Batch build. This should construct all of the Qparser tools in an appropriate order, also installing new executables in qparser\execs. DO NOT choose the Build/Build makeall.exe or Build/Rebuild All menu items; these will fail. Only the Batch build menu selector works. The batch builder uses the projects in each of the Qparser directories. You can also open a project file in each of these and build them separately; however, they should be built in the order given below. We'll discuss them next.

Generating the Parser Tools


These tools must be generated before any translators can be generated. The project list in makeall will ensure that they are built in the right order:

libc. This is a set of C library files needed by the rest of the tools. The result is qplib.lib.

nlr1. This LR parser tool is written in C, and is used in building certain other parser tools.

opt. This optimizes a table file generated by nlr1.

lr1p. This is used to generate source code from skeleton files and a parser table file.

lib. This is a set of C++ library files used by the remaining tools and most translators. The result is qplib.lib.

lextbl. This is used to generate a lexical analyzer from a parser table file.

lexfile. This is used to generate a lexical analyzer from a user-written lexical description file.

syngraph. This tool requires the above to be functional. It generates source for a translator based on syntax diagrams.

Generating Sample Translators


As a form of verification step, a variety of sample translators are also built using the parsing tools, as follows:

lextest contains several lexical description files and will build a lexical analyzer from any of them. It uses lexfile.

cskels contains a simple expression calculator. It uses the nlr1-opt-lr1p-lextbl suite.

compi contains a simple assignment-statement compiler, producing 80386 assembler. It uses the nlr1-opt-lr1p-lextbl suite.

comp contains an assignment-statement compiler, using AST technology, producing 80386 assembler.

pascal0 contains a complete Pascal grammar with no semantics. It uses the nlr1-opt-lr1p-lextbl suite.

pascal1 contains a complete Pascal grammar with partial semantics. It uses the nlr1-opt-lr1p-lextbl suite.

pascal2, 3, 4 contain partially-developed versions of the pascal5 compiler. You will get many errors during compilation of these.

pascal5 contains a complete Pascal translator with all semantics. It uses the nlr1-opt-lr1p-lextbl suite.

synskels contains a simple expression calculator using a syntax diagram description. It uses syngraph and lextbl.

syncomp contains a simple expression compiler using a syntax diagram description. It uses syngraph and lextbl.

A problem in building any of these through project makeall should be reported to the author.

Generating A New Translator


We'll assume that you want to construct a new LR translator of your own design. We recommend starting with directory cskels, compi, or pascal5. Copy one of those to a new directory, making sure that directory lib is parallel to the new one. That is, both your new directory and lib should have the same parent directory.

When you copy all the files from (say) cskels to a new directory mytrans, Visual Studio will assume that you want to build a translator from the grammar calc.grm. Changing the grammar name is difficult to do, since it's built into numerous settings and companion files, so we recommend that you just rewrite or modify calc.grm. You can of course add any number of other source files to your project as usual. The executable file will come out as calc.exe, and you can change its name later.

The default settings for Visual Studio are to build a single-tasked console application. Your application will execute from a DOS window. You can run your translator directly from Visual Studio by selecting Build/Execute (Ctrl-F5). A DOS window will then pop up and usually expect you to type in some sentence to parse. If you want to launch the program with some arguments, i.e.
calc -d2 infile.in

then choose Project/Settings/Debug, and fill in the menu entry "Program arguments" with this:
-d2 infile.in

You can also run your translator by opening a DOS window, then using cd to reach your development directory, mytrans. In that directory should be a subdirectory debug; cd to it. You should find your translator executable calc.exe there. Then type
calc -d2 infile.in

to run it. If you want to debug your translator, you must do that from Visual Studio, using its debugging features; there's no way to debug such code from a DOS prompt. Of course, you need to have selected the debug mode in VCC. If you've chosen a release mode, no symbolic debugging information will be provided.

How the Studio Project System Works


Visual Studio is designed to make it easy to collect a set of C++ source programs and other projects, then compile, link and debug them. It generates source code as requested to support GUI interfaces and their message handling. It also generates numerous helper files that keep track of your progress on any particular project.

In general, if your source files are in some directory cskels, then VCC will create a new directory cskels\debug that will contain object files, the executable, and other helper files. You can safely delete directory debug to save space, but then VCC will perform a full build the next time, reconstructing that directory. Directory rc contains GUI resource files. Since you will be setting up a console application, the GUI resources will be minimal; this directory is usually absent.

In general, VCC is very good at constructing new directories and source files needed to start a brand-new project. It's also very good at permitting you to modify resources and edit source files, adding your own to the mix if you need to. However, you will discover that converting a console project to a dialog-based project (or some other project) will be very difficult. It's best to start over with a new project, being careful to select exactly the right environment. You can later import source files as needed for your new environment.

Managing the Qparser Environment


The Qparser translator environment requires generating some source code through our private tools, which are not built into the Visual Studio environment. For example, certain files are constructed by the Qparser tools. The construction process and subsequent compilation need to be coordinated with the VCC project management. To manage all that cleanly through the project settings, we've provided a special makefile, qpgen.mak, in each project directory. This is in the form of a Unix makefile, it turns out (see the discussion of make files in Appendix 5 for more details). It can be called from within the project environment through certain project settings, using the Microsoft make utility nmake. Here's what qpgen.mak looks like as found in the cskels directory (it's essentially the same in the others):
# Makefile for Microsoft Visual C++ environment
GRM=calc

allfiles: $(GRM)lex.cpp apply.h langtab.cpp apply.cpp \
    pars.h table.h

# Generate table file from grammar and optimize the table
$(GRM).tbl: $(GRM).grm
    nlr1 -t -p $(GRM) > $(GRM).rpt
    if errorlevel 1 del $(GRM).tbl
    opt $(GRM)

$(GRM)lex.cpp: $(GRM).tbl
    lextbl $(GRM) > $(GRM)lex.cpp

# This needs to be copied in locally
pars.h: ..\lib\pars.h
    copy ..\lib\pars.h .

table.h: ..\lib\table.h
    copy ..\lib\table.h .

# Language table, specific to this grammar
langtab.cpp: $(GRM).tbl ..\lib\langtab.skc
    lr1p $(GRM) ..\lib\langtab.skc langtab.cpp
# Generate source files from skeletons
# the apply functions, specific to this grammar
apply.h: $(GRM).tbl ..\lib\apply.skh
    lr1p $(GRM) ..\lib\apply.skh apply.h

apply.cpp: $(GRM).tbl ..\lib\apply.skc
    lr1p $(GRM) ..\lib\apply.skc apply.cpp

What this does is encapsulate the build process for the generated source files langtab.cpp, calclex.cpp, apply.h, and apply.cpp. It also copies files pars.h and table.h from directory lib. This make file script is invoked through project settings that you can access as follows. We'll assume you are in the qparser\cskels directory:

Make sure the current project is calc: choose Project/Set Active Project, and make sure that calc is checked. If you check one of the others, the system will try to build that one instead.

Choose Project/Settings. Look at the list under "Settings For". Open the "calc" project. It should expand into a list of files. Find the grammar file calc.grm and select it. You should now see two tabs, "General" and "Custom Build", in the screen on the right.

Choose Custom Build. You'll see a line in Build Commands. This is an nmake directive containing the special make file qpgen.mak. In the Output file window will be a list of the files managed by qpgen.mak. This is how Visual Studio figures out whether to invoke the custom build operation, i.e. if one of these files is absent or too old compared to its dependencies.

When you click Dependencies, a list of dependent files can be entered. Apparently, file calc.grm is assumed to be a dependency, so this can be left blank. Now when calc.grm is changed, its modification date is made current, making it newer than any of the output files. When you ask Visual Studio to rebuild an executable, it will invoke the makefile qpgen.mak, causing a rebuild of the Qparser-generated source files.

You shouldn't have to change any of this if you've borrowed an existing project file. However, if you need to start a new project (perhaps dialog based, for example), then you need to enter these settings carefully in the project framework. There's no easy way to transfer them from a working project. We recommend looking at the settings for a working Qparser project, then deciding which you need to copy into your new one.

Project Dependencies
A Visual Studio project can depend on other projects. For example, the calc project in directory cskels is set up to depend on the tool lextbl and the library file qplib. These dependencies will normally appear in the file list window on the left. Now click Projects/Dependencies. A dialog will open up that shows the project dependencies. calc should appear in the upper window, and lextbl, qplib in the lower one, both checked. This means that if any file involved in lextbl or qplib was changed recently, then they need to be rebuilt before calc is rebuilt.

Strictly speaking, calc also depends on projects nlr1, opt and lr1p. However, we considered those utilities very stable, so we didn't bother including them among the dependencies. If you decide to change anything in them while rebuilding a parser, they should be added to the dependency list. You do need to make sure they are on the default DOS path created when your computer boots up and reads file

C:\autoexec.bat.

Installing an MSDOS Environment for Visual Studio


It's not a well-advertised fact, but it happens that you can invoke the Visual Studio compiler and linker through a DOS window under Windows 95 and Windows 98. (This is untested under Windows NT, 2000, ME, and XP.) This makes it possible to write a single make file, similar to those used in Unix, to build, compile and link a translator. However, you cannot then use the resulting object and executable files within Visual Studio for debugging or other purposes. Each of the sample directories contains a makefile designed to work with the VCC compiler. You need to first configure a DOS window for the 32-bit Visual Studio environment. Follow these directions, please:

Make a shortcut icon for the MSDOS window. If you don't have such an icon, try right-clicking it from the Start menu. If it isn't there, look for it in the C:\WINDOWS directory.

When you right-click the MSDOS icon, a menu will pop up. Choose Properties, then Program. You should get a menu with tabs across the top: General/Program/Font, etc.

Fill in a name for this icon, i.e. "32 bit DOS".

The "Working" line is probably blank. You can set this to the default directory for the window when it's opened.

The "Batch file" line should also be blank. Set this to "---\qparser\execs\start.bat", where the "---" is the path leading to your Qparser directory. There should be a start.bat file in directory execs. This file will be run just before your DOS window opens. It will set up an environment for the window similar to the default one found in autoexec.bat. start.bat must be modified as explained next.

You might want to check the other window options. I prefer a Screen/Window rather than a full-screen DOS window. You can then run several DOS windows at once.

Look at the qparser\execs\start.bat file with a text editor. In particular, TDIR should contain a full path name to the qparser\execs directory. This is very important: TDIR appears in various makefiles as well as later in this script, and you'll get lots of errors if it isn't assigned. For example, if you've installed Qparser in directory C:\QPARSER, then this line should read
set TDIR=C:\QPARSER\EXECS

The start.bat file invokes one of the script files vcvars16.bat or vcvars32.bat, depending on a parameter passed to it. (These are also in directory qparser\execs.) The variable MSDIR is used with the normal-mode software tools, as explained in the next section. Look at vcvars32.bat next with a text editor. The two lines to examine are these:

set MSDevDir=C:\Program Files\DevStudio\SharedIDE
set MSVCDir=C:\Program Files\DevStudio\VC

If you've installed Visual Studio in a default way in drive C:, these probably don't need to be changed. However, it's a good idea to trace these directories from a DOS prompt and make sure the target directories exist and the path names are correct. As a test, try opening the 32-bit DOS prompt window. (Did it open without any complaints?) Then cd to the cskels directory and try the following:
nmake clean
nmake

A new calc.exe file should be generated through a compilation and linkage, using your 32-bit Microsoft compiler. It'll appear in the cskels directory, not in the cskels\debug directory.

Using the Assembler ml.exe


If you want to assemble and link any of the assembly files generated by one of our student 80x86 translators, e.g. in comp, syncomp, or pascal5, you need a protected-mode assembler, i.e. ml.exe. This can be installed in the Visual Studio binary directory, i.e. in C:\Program Files\DevStudio\VC\bin. It'll then be accessible in the same way as the C compiler.

Setting up Visual Studio to run (our) pascal compiler, assemble and link its output, then run the Visual Studio debugger on the result is not a trivial matter. A worked-out solution can be found in directory pasprogs, and is adaptable to any of the pascal source files in that directory. Here are some of the issues you need to be aware of:

VCC will use its own (hidden) makefile rules to compile/assemble/link programs. In particular, without some work on your part, it will try to compile any of the .pas programs with a Microsoft Pascal compiler. You probably don't have that, so the make process fails. The cure for this problem (and others) is a tools.ini file. One that works for these programs is in pasprogs. This file essentially provides substitute default make rules for the VCC environment, overriding the built-in ones that don't work.

tools.ini also supplies certain parameters to the assembler and linker, such as the stack size (this can be quite large on a typical PC), default file name (t1.pas), parameters needed by ml.exe, and more.

VCC also requires a qpgen.mak containing information similar to that in tools.ini. This will enable it to call our private pascal compiler, assemble its output, and link in a library. The library files are in directory masmlib.

The Project/Settings need adjustment to link the project to the special makefile qpgen.mak.

Changing the pascal source file is a bit tricky. Follow the guidelines given next for that.

Starting the Visual Studio Environment


You should be able to launch VCC in directory qparser/pasprogs. Open the project pasprogs.dsw. You should see the usual file listings, etc. Two directories appear in the file list: masmlib and pasprogs. masmlib supplies three source files used in all of the example Pascal programs: pasmain.c, Serv.asm, and service.c. You can open any of these to see what they provide. Under masmlib, we have:

pasmain.c supplies a main function as a C program. It essentially initializes some IO facilities, then calls function pasMain, which is found in the Pascal program generated by pascal.exe - in this case, the assembler file expr.asm.

Serv.asm provides a large number of assembler-level support functions. Some of these call C functions, in service.c.

service.c provides some C-level support functions, called from Serv.asm or directly from the compiled program.

Under pasprogs, we have two files:

expr.pas, which is the particular Pascal example file chosen for this compiler operation.

expr.asm, which is the assembler equivalent of expr.pas. This was generated by the qparser/pascal5 compiler pascal.exe.

The project settings (Project/Settings) associated with file expr.pas should look like the following. Choose the Custom Build tab as shown. Under Build commands should be a single line starting with nmake. This is a directive to the Microsoft make utility (they call it nmake). Here are its options:

/f qpgen.mak specifies the make file to use.

FILE=expr specifies the file to build in qpgen.mak. If you look at this make file, you'll find a corresponding FILE macro.

The Output file entry specifies the file (or files) generated by this nmake application.

Compiling the Library


Choose Project/Set Active Project. Select the masmlib option. Choose Build/Set Active Configuration. Make sure the debug versions are selected. Choose Build/Build masmlib. If the library hasn't been built, this should do it.


Changing the Pascal Source File

The VCC configuration is set up for the Pascal source file expr.pas. You'll see the name expr in several places in the environment. To change the source file, do this:

Delete the two files under pasprogs files: expr.asm, expr.pas.

Right-click on the pasprogs files line. You'll get a menu of choices. Choose Add Files to Project, then select the Pascal file you want to compile, for example t1.pas. (Pascal programs aren't shown in the list by default; under Files of Type, find the entry All files, and the complete list will appear.)

Right-click on the pasprogs files line again. Choose Add Files to Project. Type in the file name t1.asm. You'll get a complaint that this may not exist (it could also be there), but override that.

Under Project/Settings, click on file t1.pas. Choose the Custom Build tab. You should get a menu like the one above, except that the white windows are empty (VCC has decided that since expr.pas was deleted, this should be clear). Type this line into the Build commands window (you may have to click on the little square icon with the yellow cross):
nmake /f qpgen.mak FILE=t1

Type this line into the Output Files white window:


pasprogs.obj

Close the Project/Settings window. Check Project/Set Active Project; the pasprogs line should be checked - click on it if not. Choose Build/Set Active Configuration and make sure the debug versions are selected. Click on Build in the top menu row, then Build pasprogs.exe. This should launch a build of program t1.exe. To be safe, you might also try Rebuild All from this menu, which should build the library and this utility.

You will notice that the executable built is called pasprogs.exe. Why isn't it called t1.exe? Because

changing the name of the target executable in VCC is too horrible to contemplate. (Perhaps Microsoft fixed this in a newer version.) We recommend sticking with that name.

What Can Go Wrong


Most of the time, your Pascal program may contain syntax or semantic errors. Or, if you are running a special compiler of your own design, it has problems. These can be seen by viewing the assembler source file t1.asm: our student Pascal compiler will write error messages directly into this file, and they will cause assembler errors. Look for the keyword ERROR if it's a large file. Try a clean followed by a rebuild all if the compilation seems to be unsuccessful. Here's what a successful build should look like under rebuild all:
--------------------Configuration: masmlib - Win32 Debug--------------------
Performing Custom Build Step
Microsoft (R) Program Maintenance Utility   Version 1.62.7022
Copyright (C) Microsoft Corp 1988-1997. All rights reserved.
        ml /Zi /c /coff Serv.asm
Microsoft (R) Macro Assembler Version 6.11
Copyright (C) Microsoft Corp 1981-1993. All rights reserved.
 Assembling: Serv.asm
Compiling...
pasmain.c
service.c
Creating library...
--------------------Configuration: pasprogs - Win32 Debug--------------------
Performing Custom Build Step
Microsoft (R) Program Maintenance Utility   Version 1.62.7022
Copyright (C) Microsoft Corp 1988-1997. All rights reserved.
        pascal t1.pas > t1.asm
        ml /Zi /c /coff /Fl /I..\masmlib t1.asm
Microsoft (R) Macro Assembler Version 6.11
Copyright (C) Microsoft Corp 1981-1993. All rights reserved.
 Assembling: t1.asm
        del pasprogs.obj
File not found
        rename t1.obj pasprogs.obj
Linking...
pasprogs.exe - 0 error(s), 0 warning(s)

Executing and Debugging the Program


You can run pasprogs.exe from the debugging environment. Try opening file pasmain.c, then step into it until you reach this line:
pasMain(); /* call the assembler runtime routine */

This is where control will be passed to the assembly generated by the Pascal compiler. You can set a breakpoint on it, or step into the assembler. Breakpoints can be placed in the assembler listing, t1.asm. You cannot set breakpoints in the Pascal listing t1.pas, because VCC has no breakpoint information about this Pascal source file.

Changing Compilers
If you want to pursue this strategy with your own compiler, you need to modify qpgen.mak. Here's what that file looks like:

# Makefile for various test cases
FILE=t1
ASOPTS=/Zi /c /coff /Fl /I..\masmlib

pasprogs.obj: $(FILE).pas
    pascal $(FILE).pas > $(FILE).asm
    ml $(ASOPTS) $(FILE).asm
    del pasprogs.obj
    rename $(FILE).obj pasprogs.obj

By changing the pascal in the line


pascal $(FILE).pas > $(FILE).asm

to the name of your new compiler, everything else should work fine. We recommend keeping the environment found in directory pasprogs, unless you enjoy spending lots of frustrating hours trying to figure out Microsoft's strategy in VCC. We also recommend running your compiler separately from a DOS prompt and examining the generated assembler files carefully before committing to a run under VCC. A small error in the compiler rapidly turns into a flood of syntax errors and other complaints under VCC.


Appendix 7: Syngraph, A Recursive Descent Parser Generator


W. A. Barrett, San Jose State University napp7.doc, vs. 2.0

Finding all the first sets in a large set of syntax diagrams on paper is not only very tedious, but prone to error. Make one mistake of commission or omission and the resulting parser will contain a serious bug that may not be detected during testing. We now describe a software tool (syngraph) that will compute these sets correctly for any syntax diagram, and will state whether the diagram set is unambiguous or not.

Syngraph also generates a recursive descent parser based on the set of syntax diagrams. You can include C++ code on any edge; that code will be executed when the edge is traversed. Syngraph provides a lexical analyzer configured to the tokens of the grammar, built with lexgen, as described in chapter 4. It also provides syntax error reporting and error recovery. With syngraph, it's easy to move from a set of syntax diagrams to a complete language interpreter or translator based on the diagrams.

The Syngraph Grammar


Syngraph requires a text-based description of all the paths through a syntax diagram. That description will be called its grammar. We'll illustrate that through an example.

Consider the set of diagrams in Figure 1. There are two syntax diagrams, one for an E nonterminal, which is also the goal, and another for T. The E diagram contains four tokens: "(", ":", ",", and ")". It also contains references to the E and T diagrams. The T diagram contains the tokens "*", "/" and ":". It refers to the E diagram twice. In the grammar's path description, we will have to distinguish these two E references by using the names E.1 and E.2.

Fig. 1. A set of syntax diagrams. E is the goal diagram.

These diagrams are clearly complete, since there are no boxed references to diagram names that are not in the set. The rightmost edge in each diagram is considered the exit or Finish edge. A path through either of these starts with the leftmost edge (the entry or Start edge) and moves

toward the right in general. One path in the E diagram starts from point 4 through the E box, to point 8, then 10, then comma (,), then point 5, point 2, point 4 and thence back to E. A path does not have to include Start and Finish, as we'll see.


Syngraph Path Description


We now describe these diagrams in the form used in a syngraph grammar. Here is a description of the E diagram:
E : Start ( E ) Finish
  | ( ":" T )
  | ":" )
  | E "," E
  | "," ":"
  ;

Although this looks confusing, the pattern is straightforward:


diagram_name : path | path | path ;

The diagram_name is the name of this particular diagram, for example, E. A path is a sequence of token names (inside the circles) and/or diagram names (inside the rectangles), for example:
Start ( E ) Finish

This describes the path starting at Start, then through left parenthesis, points 1, 2, and 4, then through E, then points 8 and 9, through right parenthesis, and then to Finish. We've decorated each diagram with some numbered points, but you don't have to do that. You should be able to see how a path can be described by using Start, Finish, and the names in the circles and rectangles.

Each diagram description starts with the diagram's name (e.g. E) followed by a colon :. A set of paths through the diagram is listed next. At least one of the paths must mention the special Start and Finish nodes; see figure 1 for these markers. Use a vertical bar | to separate the path descriptions. Use a semicolon ; to end each diagram's description. (Real semicolons appearing in a path must be quoted.)

Each path is defined by the terminal tokens and nonterminals along the path. You can list the same path segment several times, although it's best to try to describe a path or path segment just once if possible. It's important that each path segment be mentioned at least once. Also note that a path segment requires at least a starting name and an ending name. Thus the first path described above begins with Start, then includes a left parenthesis, nonterminal E, a right parenthesis, and ends with Finish:
Start ( E ) Finish

Start and Finish are reserved words; they must be spelled exactly this way in the grammar file. They cannot be tokens or diagram names (unless quoted). The Start node is where you begin tracing the paths; it corresponds to the entry point. The Finish node corresponds to the exit point. The left parenthesis ( and right parenthesis ) are not reserved words, so you don't need to quote them.

Once the first path is described, other paths connected to it in some way can be described. For example, an upper path following the left parenthesis through colon, the T nonterminal and on to Finish is described as a path like this:


( ":" T )

We don't need to write Start in front of this, since the previous path description specified an edge from Start to the left parenthesis. But we do need to mention the left parenthesis in order to specify an edge connecting it to the colon, following points 1, 2 and 3 in figure 1. The colon must be quoted since it is a reserved symbol. We also don't need to add Finish to this path, since a path from the right parenthesis to Finish is specified by the previous path.

The feedback path emerging from E through points 8, 10, the comma, points 5, 2, 4 and back into E needs this specification:
E "," E

One last path (that might easily be overlooked) is from comma, points 5, 2, and 3 to the colon, described like this:
"," ":"

These five path descriptions cover all the paths in the E diagram of figure 1. Also, no path segment is mentioned more than once.

Completeness of Path Descriptions


When have all the paths been fully described? It's tempting to draw lines along each of the paths (as described) and make sure that all the edges and nodes are covered by some line. In fact, doing that may result in missing a critical linkage between two nodes. The test for completeness is whether each edge connection between any two nodes has been described in a path. If you just trace the paths in the E diagram using the following path description, all the edges and nodes seem to be covered:
Start ( E ) Finish | ( ":" T ) | ":" ) | E "," E ;

In particular, these paths appear to cover all the edges between "(", ":", E and ",". However, the edge connection from "," to ":" is missing, and there's no way that syngraph can infer this connection. It's as though the edge leading out of "," were connected to the input of E, rather than to the junction that leads to both E and ":". That's why the last rule
"," ":"

is needed to complete the path descriptions.

The T Diagram
The T syntax diagram can be described like this:
T : Start E.1 * E.2 Finish |
    E.1 / E.2 |
    E.2 ":" * |
    ":" / ;

This has the same form as the E description. An important difference is that there are two E nonterminal nodes in this diagram. Both refer to the E diagram, of course, but in our path descriptions, we need to distinguish them. That is done with the special dot notation E.1 and E.2. If you forget to distinguish two appearances of the same token or diagram name, then the path connections will be as


though these two blocks were laid on top of each other, dragging their edges with them. If there is more than one instance of some terminal token in the same diagram, the dot notation is also required, and the token must be quoted. For example, if it happened that two different colon tokens appeared in the same diagram, you must refer to them as
":".1 ":".2 and

Use Spaces in the Path Descriptions


This is very important: Separate each of the diagram names, tokens and other symbols with at least one space. If you run two tokens together, like this:
Start E.1*E.2 Finish |

syngraph will think that E.1*E.2 is a special kind of token. The spaces make it possible for syngraph to figure out which are tokens, which are diagram names, and so forth. Tokens are worked out using lexgen. Read on.

Tokens and the Lexical Analyzer


A lexical analyzer will be constructed automatically by syngraph from information in your grammar. The analyzer is based on finite-state machine ideas, using lexgen, as described in chapters 3 and 4. syngraph can work out which of the names are diagram names from the syntax
E : paths ;

used for each diagram. All the other symbols found in the path descriptions are assumed to be tokens. Names of diagrams must be like C user identifiers. They are case-sensitive, start with a letter and must continue with letters, numbers or the underscore character ( _ ). You don't have to use E for a diagram name; you can use Expr or Expression instead. For example, these are all legal names:
name1 AnotherName Yet_Another_Name

Start and Finish are Reserved Words


These names are reserved for the entry and exit points of any one diagram:
Start Finish

They do not cause any action in the parser, or any lexical analyzer operations. This name is reserved for the designated start, or goal diagram:
goal

Lexical Tokens
The lexical tokens used in your grammar need to be declared in a .lex file. You can write your own lex file by following chapter 4, or one of the example files. Here's how you specify that in the syngraph grammar:
lexfile="../lib/c.lex";

The double quotes are essential, as is the terminating semicolon. What's in the double quotes is a path relative to the current directory in which you execute syngraph. The lex file provides information about the lexical terminals only. Any literal terminals in the lex file are ignored. All literal terminals are inferred from the grammar.


Suppose that your lex file contains this line:


Integer getInteger [0-9]+

which describes a decimal integer. Then the keyword Integer, when used in your grammar, will trigger recognition of the regular expression [0-9]+ during parsing. If you use one of the lex files lib/c.lex or lib/pascal.lex, then these lexical tokens are defined:
Identifier Integer Real Character String EOF EOL

Their meaning should be clear from chapter 4. For example, when Identifier appears in your grammar, the lexical analyzer will assume that you want to accept any string that starts with a letter and continues with letters and digits. By using Real in your grammar, the run-time lexical analyzer will accept a C floating-point number, for example, 17.35 (double), 17F2 (float), or 17.2E-3 (double). By using Character in your grammar, the run-time lexical analyzer will accept a C character, for example, '3', '\n', etc. By using String in your grammar, the run-time lexical analyzer will accept a C string (char array), for example,

"string"   "tab\t line feed \n quote\""

and so forth.

Note that \" in the second string is an embedded double-quote mark. The special token EOF stands for end-of-file. It should not appear in your grammar. It is automatically expected at the end of any sentence derived from the goal token. By using EOL (end-of-line) in your grammar, the run-time lexical analyzer will expect an end-of-line to match it with. The lexical tokens Identifier, Integer, Real, Character and String will of course generate a Ctoken object through their lexical functions. A copy of this object will be pushed into a parser stack maintained by the syngraph run-time system. An example of this will be given later.

A lexfile declaration is required whether or not you use any of the lexical tokens in your grammar. Failure to include one may cause syngraph to crash. The library directory lib contains several sample lex files. Feel free to write your own, modifying one of those.

Literal Tokens
A literal token is whatever's left over (after noting the diagram names and lexical tokens) among the tokens of your path descriptions. These can be reserved words (if then while do for etc.) or special tokens such as + - * := ( ). In general, syngraph uses spaces and tabs in the grammar file as a way of finding the tokens of your language. Whatever's between two spaces is assumed to be one of these:

- a syngraph reserved word or character,
- a lexical token, defined as such in your lex file,
- a diagram name, or
- a literal token.

Here's how these are worked out by the syngraph program. The syngraph reserved words and characters are predefined. These are:

Start Finish goal : ; | # { . "

The lexical tokens are those appearing as such in the lex file. The diagram names are inferred from the syntax of the path descriptions. Anything else is assumed to be a literal token.

If you need to include one of the reserved tokens as a language token, just quote it with double-quotes. If you need a double-quote as a token, quote it as you would in C, like this:
"\""

This also applies to any token starting with one of the reserved characters, for example,
"{}"

will be treated as the token {}.

The Goal Path


We need to do one more thing to the grammar. By default, syngraph will expect the first syntax diagram to be labelled goal. (This is built into the file user.h, and can be changed if you like). We handle that by introducing one more syntax diagram, the first one, that reads like this:
goal : Start E Finish;

The E here will be the program defined by the remainder of the grammar rules. The word goal is reserved for this purpose. Don't use some other name. Here's the complete graph file fig1.grm that we've developed above:
// fig1.grm
lexfile="../lib/c.lex";

goal : Start E Finish;

E : Start ( E ) Finish |
    ( ":" T ) |
    ":" ) |
    E "," E |
    "," ":" ;

T : Start E.1 * E.2 Finish |
    E.1 / E.2 |
    E.2 ":" * |
    ":" / ;

Building a Parser Program with Syngraph


After preparing a grammar file, syngraph is invoked like this:
syngraph grammar > report-file

The grammar file is expected to have the suffix .grm. If all goes well, these files will be generated:

- A table file (grammar.tbl), which contains information needed to later construct a lexical analyzer and certain other source files. The name grammar is drawn from your choice of grammar name as the first parameter of syngraph.
- A parser file, parser.cpp. This is a C++ program that can parse the set of syntax diagrams. It's organized as a recursive descent parser. This name can be changed by a syngraph runtime parameter.
- A report file, report-file, containing information about the analysis of your grammar.

For example, suppose the grammar is fig1.grm, and the report file fig1.rpt. Then
syngraph fig1 > fig1.rpt

will generate the files fig1.rpt, fig1.tbl, and parser.cpp.

Grammar Error Detection and Correction


Grammar fig1.grm should generate no error complaints. However, there are several possibilities of error. These will show up in the report file fig1.rpt. Some errors can be caught by syngraph, which looks for the following problems:

- Syntax errors. This path description system is a programming language in its own right. If you don't use the connectives ":", "|" and ";" correctly, a syntax error will be reported, which you need to repair.
- Incomplete or disconnected graph. Each graph must be fully connected from the Start node to the Finish node. This means that if you trace through every path from Start, you can always reach Finish. It also means that after all the paths have been traced, there are no path pieces left out of the trace.
- Edge first set conflict. This means that some transition from a node is ambiguous. That is, there are two or more transitions to different nodes that contain the same token. See chapter 8 for a discussion of the first sets associated with a set of syntax graphs.
- Empty first set on an edge. This means that there's a serious recursion problem with the grammar. There's probably a path that loops around through one or more diagram boxes without ever reading a token.
- Endless recursion in some path. This means that there's a path that loops around itself without accepting even one token. If this diagram set were permitted to become a program, then the program would never terminate under certain input conditions. This test amounts to verifying that each diagram can derive at least one terminal string.

If there's a first set conflict, it will be reported like this in your report file:
Token conflict between edges in T
  E.2   ) * , / : EOF   -> Finish
  E.2   :               -> :
** Syntax diagram is NOT valid
...The parser will not behave as expected

This was generated by erroneously writing the E description as follows:


E : Start ( E ) Finish |
    ( ":" T Finish |
    ":" Finish |
    E "," E |
    "," ":" ;


The mistake here is having the T box and the ":" box go to Finish instead of to the right parenthesis. The path descriptions are still valid in a technical sense, and describe some kind of syntax diagram, but not the one in Figure 1. However, the error message refers to the T syntax diagram description, not the E diagram. It claims there is an edge from token E.2 to a colon (:), and also an edge from E.2 to Finish. The appearance of a colon on the edge from E.2 to the colon is not a surprise, but how did a colon appear in the first set on the edge E.2 -> Finish? Where did it come from? It shouldn't be in that set.

The exit edge of the T diagram contains the tokens on the exit edge of the E nonterminal in the E diagram, plus the tokens on the exit edge of the E nonterminal in the T diagram. The former set is obviously just { , ) }, and the latter is just { * / }. The comma, colon and EOF don't belong in that set. So let's examine the grammar again, carefully. We find that this path in the E description is wrong:
":" Finish

The colon token has a transition to right parenthesis, not to Finish. A similar error is in the path
( ":" T Finish

where the transition should be from T to ), not to Finish. Using the correct grammar, the report file fig1.rpt should say that the diagram is valid, which means that there are no token conflicts on any of the edges. We can be assured that, although there may be other errors in the diagrams or our diagram description, the parser generated by syngraph will faithfully follow our description of the diagrams.

first Sets in the Report File


The token first sets as discovered from the syntax diagrams are listed in the report file. These are listed for each edge in each diagram. (It can be voluminous.) Here's part of the report for the grammar of figure 1. We've added a few comments (in parentheses) to explain the notation. It's a good idea to check this listing against your diagram to verify that all the paths were correctly described in your path descriptions.
Diagram E                                (this is for the E diagram)
Start   On (              GoTo (
Finish  ExitOn ) * , / : EOF             (the first sets on the Finish edge)
(       On (              GoTo E
(       On :              GoTo :
E       On )              GoTo )
E       On ,              GoTo ,
)       On ) * , / : EOF  GoTo Finish    (on token ) * , / : EOF goto Finish)
:       On (              GoTo T
:       On )              GoTo )
T       On )              GoTo )


,       On (              GoTo E
,       On :              GoTo :

Diagram T
Start   On (              GoTo E
Finish  ExitOn )
E       On *              GoTo *
E       On /              GoTo /
*       On (              GoTo E.2
E.2     On )              GoTo Finish
E.2     On :              GoTo :
/       On (              GoTo E.2
:       On *              GoTo *
:       On /              GoTo /

How the Parser is Constructed


The first sets described above are extremely useful in constructing a parser for our syntax diagrams. They are a complete set of road markers along our parsing highway. All we need to do is to construct a program that follows this road map, using these rules:

1. The parser is constructed as a set of functions, one function per syntax diagram. We will use the diagram's name as the function name. In our example, since we have three syntax diagrams labelled goal, E and T, syngraph will construct these functions
void goal(void) { ... }
void E(void) { ... }

and
void T(void) { ... }

Actually, since we are generating C++ objects, these will be member functions of a class Cparser, so they will look like this in the generated file parser.cpp:

void Cparser::goal(void) { ... }
void Cparser::E(void) { ... }

and

void Cparser::T(void) { ... }

2. To launch the parser, we issue a call to the goal function, which of course has the name of the first syntax diagram. Each function is expected to work through its syntax diagram, checking for token codes as appropriate, calling other syntax functions, looking for and reporting syntax errors, and, eventually, returning to its caller. 3. The parser code takes the form of a collection of switch statements with goto branch statements or

Appendix 7: Syngraph - A Recursive Descent Parser Generator, page 575

if-then-else statements. Each switch statement represents one node in a syntax diagram. The switch test is on the current token. Depending on that token's code, the token will either be scanned (terminal), or a function will be called (nonterminal). Control will then be transferred to some other switch statement. Each node will be assigned a unique label within the parser function. These labels are somewhat arbitrary, and are reused from one function to the next. There are no goto statements that reach outside the function -- that would violate C and C++ syntax rules. Here's an example of a switch statement, found in the E function:
LBL_2:
  switch (tokenNumber()) {
    case 1:  // (
      E();
      goto LBL_3;
    case 6:  // :
      tokenRead();
      goto LBL_5;
    default:
      syntaxError("expecting one of: ( : ");
      goto LBL_ERROR;
  }

This describes all the possible transitions from the left parenthesis token. See the syntax diagram, figure 1. LBL_2 corresponds to point 1 in the diagram. It's clear that only two possible paths can be taken from this point: through point 2 to point 3 and the ":" token, OR through point 2 to point 4 into the E diagram. By finding the first sets for this diagram, you should discover that the E box has a first set consisting of the token "(". The ( token happens to have the token code 1. The colon token : has the token code 6.

In this generated program, the lexical function tokenNumber returns the token code of the current token in the input stream. There's a case label for each of the tokens in the first set associated with the edges out of this node. In this example, if the current token is a colon (token code 6), it is scanned, through tokenRead(), and control is transferred to LBL_5, which will decide whether to enter the T diagram (point 5 in figure 1), or move on to the right parenthesis (after calling E). If the current token is a left parenthesis instead, function E is called, after which control passes to LBL_3, which corresponds to point 8 in figure 1.
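To make the relationship concrete, here is a minimal sketch of the lexical interface the generated parser relies on. The bodies, and the helper name nextTokenCode in particular, are invented for illustration; the real tokenNumber and tokenRead are supplied by the Qparser runtime library and are certainly more elaborate.

static int currentCode;           // token code of the current (lookahead) token

int tokenNumber(void) {
  // report the current token's code; no scanning happens here
  return currentCode;
}

void tokenRead(void) {
  // accept the current token and scan the next one from the input stream
  currentCode= nextTokenCode();   // hypothetical FSM-based scanner call
}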

4. Sometimes syngraph will optimize the switch statement by just writing an if statement instead. This happens when there is only one transition on an edge. Another optimization will permit one switch statement to fall into the next, omitting an explicit goto statement.

5. If the current token is not in any of those covered by the switch statement, a syntax error has occurred. This is caught by the default switch case. The syntax error message is keyed to the tokens that we expected to see, which are just those that appear in the case labels of the switch statement. Syngraph will generate a friendly error warning message with the token names filled in through this function call:
syntaxError("expecting one of: : (");

After reporting an error, control is passed to LBL_ERROR, which calls an error recovery function:
LBL_ERROR:
  recover(6,2,3,4,5,6,7);

Appendix 7: Syngraph - A Recursive Descent Parser Generator, page 576

Error recovery is discussed in detail later.

Observations
You'll notice that each syntax function is designed like a finite-state machine, using the goto feature of C++. The states are essentially the points in the diagram between tokens and diagram names, for example, points 1, 2, 3, etc. in figure 1. We assign a program label to each such point, then develop the necessary testing and branching code to get from that state to others along the path. The first sets help out by providing signposts for the testing. Also, syngraph has broken down our path descriptions into unit steps, for example, concentrating (in any one state) only on the paths from point 1 to point 5 or to point 8. This corresponds to finding all the transitions from one state to the next in a DFSM.

The Generated Parser for the E Diagram


Here is the complete parser (parser.cpp) for the fig1.grm grammar. It's easy to see how it expresses the E syntax diagram in figure 1, given the first sets on each of the edges that syngraph worked out for us. But beware -- the label numbers in this program do not correspond to the numbers in figure 1.
// parser.cpp, generated from input file ch8.grm
// NOTE -- the source grammar is free of conflicts.
// parser.cpp
// NOTE: this is a GENERATED file -- do not modify
#include "parser.h"
#include "eval.h"

// ================ Cparser::goal ============
void Cparser::goal(void) {
  fcnEnter("goal");
  E();
  goto LBL_2;
LBL_2:
  if (tokenNumber() == 7) {  // EOF
    tokenRead();
    goto LBL_1;
  }
  else {
    syntaxError("expecting: EOF ");
    goto LBL_ERROR;
  }
LBL_ERROR:
  recover(1,7);
LBL_1:
  fcnExit("goal");
  return;
}

// ================ Cparser::E ============
void Cparser::E(void) {
  fcnEnter("E");
  if (tokenNumber() == 1) {  // (
    tokenRead();
    goto LBL_2;
  }
  else {
    syntaxError("expecting: ( ");
    goto LBL_ERROR;
  }
LBL_2:
  switch (tokenNumber()) {
    case 1:  // (
      E();
      goto LBL_3;
    case 6:  // :
      tokenRead();
      goto LBL_5;
    default:
      syntaxError("expecting one of: ( : ");
      goto LBL_ERROR;
  }
LBL_3:
  switch (tokenNumber()) {
    case 2:  // )
      tokenRead();
      goto LBL_4;
    case 4:  // ,
      tokenRead();
      goto LBL_7;
    default:
      syntaxError("expecting one of: ) , ");
      goto LBL_ERROR;
  }
LBL_4:
  switch (tokenNumber()) {
    case 2:  // )
    case 3:  // *
    case 4:  // ,
    case 5:  // /
    case 6:  // :
    case 7:  // EOF
      goto LBL_1;
    default:
      syntaxError("expecting one of: ) * , / : EOF ");
      goto LBL_ERROR;
  }
LBL_5:
  switch (tokenNumber()) {
    case 1:  // (
      T();
      goto LBL_6;
    case 2:  // )
      tokenRead();
      goto LBL_4;
    default:
      syntaxError("expecting one of: ( ) ");
      goto LBL_ERROR;
  }
LBL_6:
  if (tokenNumber() == 2) {  // )
    tokenRead();
    goto LBL_4;
  }
  else {
    syntaxError("expecting: ) ");
    goto LBL_ERROR;
  }
LBL_7:
  switch (tokenNumber()) {
    case 1:  // (
      E();
      goto LBL_3;
    case 6:  // :
      tokenRead();
      goto LBL_5;
    default:
      syntaxError("expecting one of: ( : ");
      goto LBL_ERROR;
  }
LBL_ERROR:
  recover(6,2,3,4,5,6,7);
LBL_1:
  fcnExit("E");
  return;
}

// ================ Cparser::T ============
void Cparser::T(void) {
  fcnEnter("T");
  E();
  goto LBL_2;
LBL_2:
  switch (tokenNumber()) {
    case 3:  // *
      tokenRead();
      goto LBL_3;
    case 5:  // /
      tokenRead();
      goto LBL_5;
    default:
      syntaxError("expecting one of: * / ");
      goto LBL_ERROR;
  }
LBL_3:
  E();
  goto LBL_4;
LBL_4:
  switch (tokenNumber()) {
    case 2:  // )
      goto LBL_1;
    case 6:  // :
      tokenRead();
      goto LBL_6;
    default:
      syntaxError("expecting one of: ) : ");
      goto LBL_ERROR;
  }
LBL_5:
  E();
  goto LBL_4;
LBL_6:
  switch (tokenNumber()) {
    case 3:  // *
      tokenRead();
      goto LBL_3;
    case 5:  // /
      tokenRead();
      goto LBL_5;
    default:
      syntaxError("expecting one of: * / ");
      goto LBL_ERROR;
  }
LBL_ERROR:
  recover(2,2,7);
LBL_1:
  fcnExit("T");
  return;
}

Entry and Exit Trace Functions


The functions fcnEnter and fcnExit provide a simple tracing utility. fcnEnter appears at the start of each nonterminal function, and fcnExit at its end. These are defined in rlex.cpp, and just print the function's name, nicely indented to correspond to the function calling level. Tracing is suppressed by default (although these functions are called anyway). Tracing is enabled by the runtime option -d.
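Here is a minimal sketch of what this tracing pair might look like; the real versions in rlex.cpp also print the current token (as in the -d trace shown later), and the names traceEnabled and callDepth are assumptions for illustration.

#include <iostream>
using namespace std;

static bool traceEnabled= false;   // set by the -d runtime option
static int  callDepth= 0;          // current function nesting level

void fcnEnter(const char* name) {
  if (traceEnabled) {
    for (int i= 0; i < callDepth; i++) cout << ' ';   // indent to calling level
    cout << "call " << name << endl;
  }
  callDepth++;
}

void fcnExit(const char* name) {
  callDepth--;
  if (traceEnabled) {
    for (int i= 0; i < callDepth; i++) cout << ' ';
    cout << "exit " << name << endl;
  }
}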

Why all the GOTO Statements?


The reader may be wondering why this parser is written in such a non-structured way. Those who write C and C++ code professionally usually avoid the goto statement, having dutifully been taught that such code is unstructured and impossible to maintain. (This is true in general). There are also many hard-coded numbers (the token codes) in parser.cpp, which good programming practice insists should be described in symbolic form.

In fact, parser.cpp is a very straightforward representation of the syntax diagrams and their associated road signs, the first sets. It is wholly generated automatically from the diagrams and their analysis. The generation process is relatively simple and provably correct. The computer will therefore correctly work through this ugly code at runtime and do the right thing. So why should one worry about the form of the source code? Indeed, it would be much more difficult to cast this parsing operation in the form of structured code, i.e. using if-then, while-do, etc., efficiently without the use of the goto statement.

Note that there are plenty of clues built into the code. The syntax diagram names are preserved as function names. The token numbers are translated into their token string equivalents and presented as C++ comments. So tracing through this code with a symbolic debugger, in case one must, is relatively easy.

Error Recovery
Error recovery for our recursive descent parser is quite simple. Whenever an error is discovered in a syntax diagram, we essentially abandon any hope of somehow getting back on track in this diagram. We instead scan tokens, looking for one that can follow this diagram. The tokens that can follow this diagram are just the first sets on the Finish node, and are given by our first set analysis. We always add the EOF token to this list to cover the case of seeing the end of file token before we see anything legal that could permit us to continue.

After finding a favorable token, we merely return from the current function. This strategy is guaranteed to allow the function's caller to continue for at least one more token without another complaint. It does not necessarily guarantee a clean recovery, since there may be more syntax trouble after that token is accepted. Little more can be done.

Note that a vital part of the recursive descent parsing strategy is in calling functions. The function return addresses are pushed on the runtime stack, and are generally inaccessible to a program. In particular, it's not possible to try out a return, and later abandon the return if it proves unsuccessful. That is, once we decide to return from a function, there's no changing our mind later, i.e. "Sorry, I didn't mean to return. Would you please go back into the function I returned from?"

The function used for recovery is called recover. It accepts the first set of the Finish node as a variable-arguments list. The first number in the list is the number of tokens following it. It's in file recover.cpp. In our example above, error recovery occurs through the call
recover(6,2,3,4,5,6,7);

which reads tokens until one of the tokens 2, 3, 4, 5, 6, or 7 is seen. It then returns without scanning through that token.
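As a minimal sketch (the real version is in recover.cpp and may differ in detail), recover can be written with the standard C variable-arguments machinery, using the same tokenNumber and tokenRead interface as the generated parser:

#include <cstdarg>

void Cparser::recover(int count, ...) {
  int followers[64];               // assumed upper bound, for illustration only
  va_list args;
  va_start(args, count);
  for (int i= 0; i < count && i < 64; i++)
    followers[i]= va_arg(args, int);
  va_end(args);
  for (;;) {                       // scan until a token that can follow this diagram
    int tok= tokenNumber();
    for (int i= 0; i < count; i++)
      if (tok == followers[i]) return;   // found one; return without scanning it
    tokenRead();                   // skip the offending token and look again
  }
}

Since the EOF token is always included in the follower list, the scanning loop is guaranteed to terminate.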

Token Symbol Table


A list of all the tokens and their token numbers is given at the top of the syngraph output file, i.e. fig1.rpt. It's self-explanatory. Here's what it looks like:
Symbol Table
 1  TOKEN    (
 2  TOKEN    )
 3  TOKEN    *
 4  TOKEN    ,
 5  TOKEN    /
 6  TOKEN    :
 7  TOKEN    EOF
 8  DIAGRAM  E
 9  DIAGRAM  T
10  DIAGRAM  goal

If you are working with a new or unfamiliar set of syntax diagrams, it's wise to go through this token list. Make sure that every token and syntax diagram name in the diagrams is so marked in this list. If it isn't, then the path descriptions don't accurately reflect the diagrams. A common error is spelling a token or diagram name in different ways, perhaps with different capitalization. Note that the token "if" is not the same as "If".

Completing the Parser using make


The program syngraph merely generates a few files. They are sufficient to construct a parser C++ source program, but more work is required to generate an executable parser program. The additional work is done through a make file. You must have the Qparser tools installed. See appendix 1 or 2 for installation instructions. Directories syncalc and lib contain the necessary extra parts needed. These must be subdirectories of a common directory, since the makefile will try to refer to lib through the file reference ../lib.

1. First compile directory lib (if it hasn't been compiled already), which is easy: just call make within directory lib. The result of constructing the lib directory will be an object library qp.lib.

2. Change to directory syncalc. Change the GRM=xxx line in makefile to refer to your grammar name, with its full suffix, i.e.

GRM=fig1.grm

3. Write the grammar file fig1.grm.

4. Run make within directory syncalc. The result should be a functioning parser as an executable file with the same name as your grammar. Under DOS, it will be fig1.exe. Under Unix, it will be fig1.

5. Any problems? Examine the report file fig1.rpt for hints. Syntax errors and other problems are reported in it.

6. Try executing fig1. When executed, you'll get a simple prompt, after which you need to enter something that corresponds to your syntax diagrams' description, i.e.
> fig1                       ; call your parser from a DOS/Unix prompt
>> ( : ( : ) )               ; type in a sentence to parse
>> control-Z or control-D    ; provide an end-of-file marker

Your executable parser is very particular about end-of-file. You will have to enter a control-Z (DOS) or control-D (Unix) in order to satisfy the parser's need for an end-of-file marker. You can also type control-C, but that terminates the task without completing the parse.

Input File
You can also write your input sentences as a text file. Just name the text file in the parser's call line, for example:
> fig1 e1.in

The source lines will be printed using option -s, like this:
> fig1 -s e1.in

The end-of-file will be provided by the operating system upon reaching the end of the disk file.

Function Call Tracing


The parser can show you its parsing actions (function calls, token reads) through the run-time parameter -d or -D. With -d, the parser will pause every so often for you to see the results so far. With -D, it will not pause, but rather continue through the complete parsing. For example, suppose we prepare the input file e1.in, like this:
(( (: (:) * (:) : / (:) ) ), (:) )

which looks pretty confusing, but in fact satisfies the syntax diagrams of figure 1. Now use the -d option like this:
> fig1 -d e1.in

The result will be something like this:


call goal : (
 call E : (
  read SPECIAL (
  call E : (
   read SPECIAL (
   call E : (
    read SPECIAL (
    read SPECIAL :
    call T : (
     call E : (
      read SPECIAL (
      read SPECIAL :


      read SPECIAL )
     exit E
...Enter to continue

The tracing will pause on each exit. More will appear as you press the Enter key. Each call line says that the specified function has been called, with the current token shown. Each exit line says that the specified function is about to return. Each read line says that a new token has been scanned, and is as shown. The indentation is designed to show the function calling depth.

Valid vs. Invalid Input Sentences


Note that all we've constructed is a parser. We've provided nothing for it to do in response to parsing something, so of course, nothing comes back. However, it will complain if it finds a syntax error in your input string. Try some invalid string, and see what happens. You should get an error complaint and then a recovery action that will allow you to continue typing something else, maybe even without more error complaints!

Details of the syncalc makefile


After the utility syngraph is called, there's more work to be done to build a complete parser. Another tool, lr1p, will be used to generate more source files. This tool draws on a binary table file fig1.tbl to do its job. Recall that fig1.tbl was prepared by syngraph. lr1p also requires some skeleton files that describe the particular task at hand. (This utility is described in more detail in chapter 12). In particular,

1. lexf.cpp provides part of the lexical analyzer. A special FSM-based lexical analyzer is also built in a file called fig1lex.cpp, where the fig1 part is the name of your grammar file. fig1lex.cpp is generated through lexgen described in chapter 4.

2. table.skh provides the skeleton for table.h. This supplies a critical table of token names and some other constants for the lexical analyzer. This skeleton is also shared by the LALR system. table.skh is in directory lib.

3. langtab.skc is the skeleton for langtab.cpp, and can be found in directory lib. This provides some lexical analyzer tables.

4. There may be other .skh and .skc files expanded by lr1p.

5. Several other utility files are provided in the lib directory. These generally need no changes for an application.

Adding Semantic Operations to a syngraph Grammar


You can add semantic operations to a syngraph grammar in the form of C++ code fragments. Here's how that's done. A fully worked-out example is in directory syncomp. This happens to generate 8086 assembler in response to an arithmetic expression grammar. Another example, a calculator, is in syncalc.

Semantics go on Edges in the Diagram


Suppose you want to execute some code during the parsing of the diagram in figure 1, at point 3 in the diagram. This point is passed through in a transition between the left parenthesis token and the colon token (path 1, 2, 3). In the path description, this will show up as this path:


( ":" T

) |

Now introduce this source code between those two tokens:


{ cout << "between ( and :" << endl; }

Here is how this will look in your grammar file. The newly inserted material is the brace-enclosed code fragment:
( { cout << "between ( and :" << endl; } ":" T ) Finish

Notice that the inserted code must be enclosed in braces { . . . }. Inside the braces, you can write any sort of C++ code fragment. This code fragment will be executed just after the "(" is scanned, but before the ":" is scanned. You can use extra spaces, tabs or line returns to make this description less confusing to read, like the following. (It can become very confusing!)
(
   { cout << "between ( and :" << endl; }
   ":" T ) Finish

In general, any C++ code fragment you add to a path description is executed just before the transition is made.

Tagging a Code Fragment


It's a good idea to make up a tag for each code fragment. This will find its way into a tracing tool that can help you find bugs in your grammar or semantics. A tag is any C-style identifier, prefixed with a pound sign, #. Place it just before a C code fragment, like this:
( #REPORT { cout << "between ( and :" << endl; } ":" T ) Finish

If you don't provide a tag, one will be provided for you by syngraph. The tag names will be reported by tracing functions when you run your translator with option -d or -D, once just before executing the code fragment, and once just after. Tags must be unique. Don't use the same tag twice in a grammar.

The Parser Pushdown Stack


A pushdown stack is provided in the syncalc and syncomp file set. It's an application of the Cstack class described in chapter 0. Its elements are Csem* pointers, which require a derived class that holds something. Some example derived classes are given in eval.h. In particular, each token will be pushed into this stack just after it's scanned, as a Ctoken* object. When you introduce the token Identifier in your grammar (assuming this is in the lex file you've chosen), the lexical analyzer will be alerted during parsing to accept an identifier token. The string value of that token will be carried in a Ctoken object, and a pointer to a copy of that object is pushed in the parse-time stack. It's your responsibility to pop the stack and eventually to delete these objects. Other lexical tokens with an associated Ctoken object class are treated the same way.

For example, the following is a section of the syncomp.grm grammar. We've added interleaved comments to explain the operations.
alist : Start Identifier ":=" sexpr

The following (alistAssign) is a tag. The C++ code fragment will be executed just after sexpr has been parsed.
#alistAssign {

runStack->pop() pops the parser stack, returning a pointer to a Ctoken object. This will be the Identifier by the overall strategy of this little compiler. sexpr should be such that anything pushed on


the stack will also be popped.


Ctoken *id= (Ctoken*) runStack->pop();

Here's the extracted identifier name


const string name= id->getStringValue();

...and here's where we use a symbol table to look up the name


Csymbol *cs;
if (!symtab.findSymbol(name, cs)) {

If the name isn't in the symbol table, we push it in. (This push isn't the same as the runStack push).
  cs= new Csymbol;
  symtab.pushSymbol(name, cs);
}

Generate assembler to do the assignment. The right member of the assignment is in EAX.

// generate the assignment code in assembler
cout << "  mov  " << name << ",EAX\n";

Now that we're finished with the Identifier object, we need to delete it.
delete id;
}
";" Finish |
";" Identifier ;

The parser stack can also be used to push other objects. It carries pointers to objects, so the object can be of any class. It's obviously important to keep track of what's been pushed, so that it will later be popped at the right time.
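As a hypothetical illustration (the class name Cint and its members are invented, and the real Csem base class in the library may require more of a derived class than this), a pair of code fragments might carry an intermediate integer through the parser stack like this:

// an invented Csem-derived class for carrying an integer through runStack
class Cint: public Csem {
  int value;
public:
  Cint(int v) : value(v) {}
  int getValue(void) const {return value;}
};

// ...in one code fragment, push an intermediate result:
//     runStack->push(new Cint(42));
// ...in a later fragment, pop it (LIFO order) and take ownership:
//     Cint *ci= (Cint*) runStack->pop();
//     int n= ci->getValue();
//     delete ci;    // whoever pops the object must delete it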

A Walkthrough of syncomp
The default translator found in directory syncomp implements a simple compiler for assignment statements. We wrote a recursive descent translator for assignment statements to CPU integer-arithmetic code in chapter 7. We'll now show how something similar can be done using syngraph. We now have four levels of execution: (1) syngraph and its make file executes to produce a parser. (2) When the parser executes with a source file, it generates a target assembler file. The source file here will consist of a sequence of assignment statements. (3) Given an 80x86 assembler, we could use it to assemble and link the assembler file into an executable file. (4) This last executable file should execute through the assignment statements listed in the source file used in step (2).

It can be confusing to keep track of which level is being discussed in what follows. In most cases, we will discuss the generation of the assembly code alongside the future execution of that code in step (4). It's particularly confusing in the framework of step (1), since the recursive-descent compiler running in step (2) only approximately follows the sequence of diagram parts described in the grammar.

The beginner should concentrate on writing a simple parser first, with no semantics, and get it right through various tests of its behavior. Then a strategy should be devised for dealing with the semantics. It's best to use the syntax diagrams to develop and proofread your strategy. We'll discuss that informally as we develop it. The goal of our strategy is to generate reasonably efficient assembler code that represents the operations in the higher-level assignment-statement language. That code will be generated in unit pieces following several rules framed in the strategy.

Finally, the semantic operations need to be inserted carefully into the grammar path descriptions. You can insert these incrementally, and test the results as you go. Use the tracing features to verify that

your semantics appears where you expect it to within the diagrams, and use the diagrams as the base reference. You'll get lost if you just try to read this confusing mix of path descriptions and C++ code fragments. Our strategy is based on the idea of performing a kind of postfix operation on operands, using a syntax diagram similar to that in figure 3 of chapter 7, except that we need another diagram to support multiple assignment statements. Recall that the EAX register will be used for temporary results, and the runtime pushdown stack is used to save additional temporaries. So each nonterminal (syntax diagram) is responsible for evaluating its expression and leaving the result in EAX. We can then assume that each nonterminal will do just that, which helps us make sure that will happen when working in each diagram. It's important not to get distracted by what's happening in some other diagram, and this rule provides firewalls around each of the diagrams in that sense.

Start with the Grammar


We start by writing a grammar. Here's what that looks like, with no semantics. By now, the reader should be able to see that this describes a sequence of assignment statements separated by semicolons, e.g. programs like this:
a := 15+17; b:=a-2*a

Here's the grammar, with no semantics. We've added a few comments to help understand it.
// scomp.grm: simple expression syntax diagram
// Same as syncomp.grm, except with no semantics

Lexfile="../lib/c.lex";
Lexterminals: Identifier, Real, Integer, EOF;

goal : Start alist EOF Finish;

An alist is a sequence of assignment statements. If you draw the graph for this, you'll see that there's a feedback transition from the semicolon token back to the Identifier token. After a semicolon, you can also exit to Finish.
alist : Start Identifier ":=" sexpr ";" Finish |
        ";" Identifier ;

This describes a single term object or a sequence of term objects separated by + or -. It appears repetitious because of the need to describe the feedback path from the second term back to an arithmetic operator. This is essentially the simple_expression diagram, figure 3, chapter 7, except that no unary + or - is supported here.
sexpr : Start term.1 Finish |          // sum and difference
        term.1 + term.2 Finish |
        term.1 - term.2 |
        term.2 + |
        term.2 - ;

term : Start factor.1 Finish |         // product and division
       factor.1 "*" factor.2 Finish |
       factor.1 / factor.2 |
       factor.2 "*" |
       factor.2 / ;

We've chosen to support unary + or - here instead of in the sexpr diagram. This shifts its precedence a little, but for most purposes will make no difference.
factor : Start primary Finish |
         Start + primary |
         Start - primary;    // unary minus

This is equivalent to the factor diagram, figure 3, chapter 7. We call it a primary instead because we've used factor in the previous diagrams. At parse time, the Identifier and Integer objects will be pushed on the parser stack just after the lexical analyzer finds them, i.e. just before Finish.
primary : Start Identifier Finish |
          Start Integer Finish |
          Start ( sexpr ) Finish;

Semantics for a primary


We're now ready to add semantic operations to these path descriptions. Let's start with the primary path first, since it seems to be the most rudimentary. As before, we'll add comments to explain the operations. All the C++ code fragments will be written into member functions of a class Cparser. Recall that each syntax diagram becomes a member function of that name, so there'll be a function Cparser::primary, Cparser::term, Cparser::factor, etc. corresponding to each of our syntax diagrams. The code that you write into the diagram path descriptions will become part of the appropriate function.
primary : Start Identifier

Our strategy with an identifier that appears here is to pop it from the runStack, then see if it's in the symbol table.
#IDENT {
    Ctoken *ti= (Ctoken*) runStack->pop();
    const string name= ti->getStringValue();
    Csymbol ts;

Here, it isn't in the symbol table. We complain that it's undefined, then define it anyway. All identifiers are supposed to be associated with 32-bit integers in the end, so we don't need any type descriptors.
    if (!symtab.findSymbol(name, ts)) {
      cerr << name << " is undefined\n";
    }

Any identifier appearing in an expression needs to be loaded into EAX


cout << " mov delete ti; } Finish | Start Integer #INTEGER eax," << name << endl;.

A literal integer will also be fetched from the parser stack. We need to get its value into EAX.


  {
    Ctoken *tr= (Ctoken*) runStack->pop();
    cout << "  mov  eax," << tr->getStringValue() << endl;
    delete tr;
  }
  Finish |

This clause in the path description handles parenthesized expressions. We don't need to do anything except parse them.
Start ( sexpr ) Finish;
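The fragments above lean on two symbol table calls, findSymbol and pushSymbol. The actual Qparser symbol table is described in chapter 6; as a rough sketch of the interface being assumed here, a minimal class built on std::map might look like this (the real class evidently offers more than one findSymbol overload, since the fragments call it with both a Csymbol and a Csymbol*):

#include <map>
#include <string>
using namespace std;

class Csymbol { /* type and attribute information would live here */ };

class Csymtab {
  map<string, Csymbol*> table;
public:
  // return true, setting cs, if name has been declared
  bool findSymbol(const string& name, Csymbol*& cs) {
    map<string, Csymbol*>::iterator it= table.find(name);
    if (it == table.end()) return false;
    cs= it->second;
    return true;
  }
  // enter (or overwrite) a name
  void pushSymbol(const string& name, Csymbol* cs) {table[name]= cs;}
};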

Semantics for a factor


Now let's look at the semantics for a factor. This is supposed to support unary operations: when a unary minus is present, we will have to negate the value that primary computes. Following is the decorated path description.
factor : #Local {smType op= NOOP;}
    Start primary #priU {unaryOp(op);} Finish |
    Start + #priUPLUS {op= UPLUS;} primary |
    Start - #priUMINUS {op= UMINUS;} primary;    // unary minus

This illustrates another feature provided by syngraph: declaring local variables, through the #Local statement. The #Local statement must appear just after the "factor :" header. Whatever you place in the braces will appear in the factor function between the function header and the start of the parsing code, like this:
void Cparser::factor(void) {

Here's where the #Local material is inserted:


smType op= NOOP;

...and here's where the factor function normally starts


fcnEnter("factor"); switch (tokenNumber()) { case 5: // tokenRead(); goto LBL_4; case 4: // + ... etc.

It's important to work out the order in which the various paths of the diagram are executed. In particular, notice that any semantics code appearing just before Finish will be executed last. So the code fragment
{unaryOp(op);}

is executed after all of the others. Now the code fragment #priUPLUS:
Start + #priUPLUS {op= UPLUS;} primary |

will be scanned (at parse time) just after a unary + operator is seen, but before the primary is executed. We need to let primary run before applying a unary + or - operation, but we also need to keep track of


which operation will be needed. So the general idea here is the following:

- Just after + or - is scanned, we set op to the appropriate operation, UPLUS or UMINUS. These names are defined in parser.h as enumerated types.
- Just before returning from this parsing function, we call unaryOp with the op value we've saved. This function is supposed to generate assembly code to carry out the unary operation at runtime. Typically, it will only respond to op == UMINUS, and will cause the value left in EAX to be negated, as sketched below.
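Here is a plausible sketch of unaryOp, consistent with the convention that each nonterminal leaves its result in EAX; the actual version in the syncomp file set may differ in detail:

void Cparser::unaryOp(smType op) {
  switch (op) {
    case UMINUS:
      cout << "  neg  EAX" << endl;    // negate the value left in EAX by primary
      break;
    case UPLUS:                        // unary plus generates no code
    case NOOP:
    default:
      break;
  }
}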

Semantics for a term


A term expresses the syntax diagram given in figure 3, chapter 7. Here's the decorated path description for that diagram:
term :

#Local causes this to appear at the top of the generated term function. We need a local variable op to hold a particular operation temporarily.
#Local {smType op= NOOP;}
    Start factor.1 Finish |            // product and division

Two things are happening here. The factor.1 function call will leave an integer in EAX (through assembler, of course). We grab the operator here, MPY. EAX needs to be saved through a push for the sake of the second factor.2 call. After the second factor call, we call binOp to generate appropriate runtime operational code.
factor.1 "*" #termMPY {op= MPY; cout << " push AX\n"; } factor.2 #termB1 {binOp(op);} Finish |

This does the same thing, except for a division. Notice that we don't need to repeat the semantics following factor.2, since it'll be there from the previous path description.
factor.1 / #termRDIV {op= RDIV; cout << "  push EAX\n"; }
    factor.2 |

These cover the same operations in the feedback path from the end of factor.2 through an operator, back in again. We don't need to repeat the "op= MPY;" semantics, since it's covered in the previous paths.
factor.2 #termB2 { binOp(op);} "*" |
    factor.2 #termB3 { binOp(op);} / ;

The binOp Function


Function binOp is supposed to generate assembler target code for one of the binary operators. The idea is that the left operand will be on the runtime stack, while the right operand will be in EAX. The generated code pops the stack, performs the operation and leaves the result in EAX, through assembler operations, of course.
void Cparser::binOp(smType op) {
  int labelValue;
  if (op==NOOP) return;
  switch (op) {
    case ADD:
      cout << "  pop  EDX" << endl;
      cout << "  add  EAX,EDX" << endl;
      break;
    case SUB:
      cout << "  pop  EDX" << endl;
      cout << "  sub  EAX,EDX" << endl;
      cout << "  neg  EAX" << endl;
      break;
    case MPY:
      cout << "  pop  EDX" << endl;
      cout << "  imul EDX" << endl;
      break;
    case RDIV:
      cout << "  mov  ECX,EAX ; the divisor" << endl;
      cout << "  pop  EAX ; the dividend" << endl;
      cout << "  cdq" << endl;
A run-time test for divide-by-zero is provided by the next few lines. The division instruction is skipped if the divisor is 0.
cout << " cmp ECX,0 ; test for divide by 0" << endl; labelValue= newLabel(); cout << " jz LBL_" << labelValue << endl; cout << " idiv ECX" << endl; cout << "LBL_" << labelValue << ":" << endl; break; default: cerr << "**Unknown operator " << (int) op << endl; break; } }

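binOp relies on newLabel to hand out fresh assembler label numbers. Presumably this is nothing more than a counter; here is a minimal sketch, in which the member name labelCount is an assumption:

int Cparser::newLabel(void) {
  return labelCount++;   // labelCount starts at 0, yielding LBL_0, LBL_1, ...
}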
A Sample Run
Given the translator syncomp developed as above, let's run it on a small source file. Here's the source file, t1.in:
// t1.in
// Fodder for SYNCOMP, illustrating various assignment
// statements and their resolution in 8086 assembler

a123 := (5-6);
bbb := 22 - a123;
c15 := -a123 * bbb/(5-2);

and here's what syncomp produces in response. The first few lines make it possible to assemble this into a working program:
        INCLUDE aservice.asm
        .STACK 4096          ; reserve stack space
        .CODE
        PUBLIC _pasMain
_pasMain PROC NEAR32
; 1: { t1.in
; 2:   Sample for SYNCOMP, illustrating a few assignment statements
; 3:   and their resolution in 8086 assembler
; 4: }

Here is where the source code actually starts


; 5:
; 6: a123 := (5-6);


        mov  EAX,5
        push EAX
        mov  EAX,6
        pop  EDX
        sub  EAX,EDX
        neg  EAX
        mov  A123,EAX
        lea  EBX,A123$S
        call rptStringNQ
        mov  EAX,A123
        call rptInt
        call rptEOL
; 7: bbb := 22 - a123;
        mov  EAX,22
        push EAX
        mov  EAX,A123
        pop  EDX
        sub  EAX,EDX
        neg  EAX
        mov  BBB,EAX
        lea  EBX,BBB$S
        call rptStringNQ
        mov  EAX,BBB
        call rptInt
        call rptEOL
; 8: c15 := -a123 * bbb/(5-2);
        mov  EAX,A123
        neg  EAX
        push EAX
        mov  EAX,BBB
        pop  EDX
        imul EDX
        push EAX
        mov  EAX,5
        push EAX
        mov  EAX,2
        pop  EDX
        sub  EAX,EDX
        neg  EAX
        mov  ECX,EAX   ; the divisor
        pop  EAX       ; the dividend
        cdq
        cmp  ECX,0     ; test for divide by 0
        jz   LBL_0
        idiv ECX
LBL_0:
        mov  C15,EAX
        lea  EBX,C15$S
        call rptStringNQ
        mov  EAX,C15
        call rptInt
        call rptEOL
; 9:
; 10:
; 11:
        ret
_pasMain ENDP


The following lines declare the variables appearing in this code. The string forms such as A123$S were included for the sake of tracing: they are used to report assignment values by name.
        .DATA
A123$S  DB   8,'A123 = ',0
A123    DD   0
BBB$S   DB   7,'BBB = ',0
BBB     DD   0
C15$S   DB   7,'C15 = ',0
C15     DD   0
        END

Comments
The generated code is correct, but not at all optimized. Trying to optimize the code through C++ fragments is a losing task, as one will rapidly be swamped in details. A better strategy, described in chapter 14, is to generate an abstract representation of the code semantics, in the form of an abstract syntax tree (AST), then separately analyze and walk through the AST to produce optimized assembly code. This can also be done with syngraph, of course, but we'll discover that a bottom-up approach to parsing makes this task much clearer.

Summary
syngraph and its associated files provide a friendly way to write simple translators from a syntax diagram description. The approach offers a high degree of safety in implementing a new language, since any ambiguities or other problems with the set of syntax diagrams are reported. The resulting parser includes a lexical analyzer formed by finding the terminal tokens in the grammar paths. It also provides a safe and accurate parsing framework, including syntax error reports and error recovery. Semantic operations, i.e. generating assembly code or interpreting an expression, can be written into selected edges of the syntax diagram transitions. Although the approach seems simple, and is a direct reflection of the underlying syntax diagrams, the resulting grammar file with semantics rapidly becomes convoluted and difficult to understand.


Index
abstract syntax tree, 117, 123, 124, 125, 265, 267, 269, 270, 271, 274, 275, 276, 281, 282, 285, 286, 287, 295, 298, 300, 304, 305, 308, 310, 384, 392, 393, 397, 398, 400, 401, 405, 406, 408, 412, 557, 592 actual parameter, 355, 356, 364, 365, 424, 425, 426, 493 add instruction, 14, 15, 16, 17, 18, 20, 35, 37, 40, 47, 103, 121, 122, 135, 173, 175, 180, 181, 182, 183, 188, 205, 212, 213, 222, 225, 236, 240, 241, 242, 250, 251, 253, 254, 258, 259, 260, 262, 268, 269, 282, 284, 285, 287, 288, 289, 290, 296, 297, 300, 304, 308, 314, 315, 350, 384, 389, 398, 411, 415, 416, 417, 418, 420, 421, 424, 426, 428, 429, 437, 477, 478, 479, 482, 483, 484, 485, 487, 501, 508, 557, 569, 580, 583, 584, 587, 589 addressing, 6, 19, 348, 383, 459, 460, 463, 464, 468, 469, 473, 474, 477, 505 Aho, Alfred, 55, 203, 234, 361 AIX workstations, 553 ambiguity, 10, 70, 80, 100, 101, 102, 104, 106, 113, 117, 128, 140, 183, 186, 193, 194, 211, 221, 222, 224, 225, 226, 329, 423, 471, 573 unambiguous, 101, 102, 166, 193, 566 ASCII table, 26, 80 assignment operation, 360 attributes, 127, 129, 131, 132, 134, 136, 137, 138, 142, 143, 253, 270, 271, 312, 322, 340, 415, 429, 430, 533 big endian, 323 binary tree, 131, 133, 134, 135, 387, 388, 412 Borland, 485, 555 bracket notation, 78 C++ const, 74, 76, 82, 90, 91, 92, 127, 132, 134, 138, 140, 170, 171, 176, 178, 182, 230, 231, 232, 236, 247, 254, 255, 256, 257, 258, 260, 261, 266, 275, 277, 279, 281, 282, 291, 292, 316, 321, 330, 332, 335, 336, 337, 339, 340, 341, 344, 347, 349, 350, 351, 354, 357, 371, 426, 427, 428, 429, 435, 436, 437, 438, 439, 441, 442, 443, 444, 445, 446, 447, 449, 450, 451, 454, 456, 457, 543, 544, 585, 587 object, 7, 21, 57, 73, 75, 76, 82, 83, 84, 86, 87, 88, 90, 91, 92, 100, 105, 118, 119, 129, 131, 132, 134, 136, 138, 140, 141, 142, 170, 171, 236, 237, 238, 240, 241, 245, 247, 248, 251, 254, 262, 263, 271, 273, 274, 275, 276, 279, 280, 281, 282, 285, 292, 294, 314, 315, 319, 320, 321, 328, 329, 330, 331, 332, 333, 334, 335, 337, 338, 339, 340, 341, 342, 343, 347, 350, 351, 352, 353, 354, 355, 357, 358, 359, 362, 369, 372, 393, 399, 400, 402, 403, 404, 405, 408, 409, 410, 414, 415, 416, 417, 418, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 441, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 455, 456, 486, 541, 542, 543, 544, 554, 558, 560, 571, 581, 584, 585, 586 prototype declaration, 81, 86, 105, 106, 354, 355, 416, 424, 429, 441, 445, 446 calculator, 173, 197, 250, 252, 253, 256, 556, 557, 583 call subroutine instruction, 6, 8, 12, 13, 14, 16, 17, 18, 19, 35, 39, 47, 58, 72, 73, 84, 85, 86, 87, 88, 91, 92, 95, 99, 103, 104, 108, 109, 110, 113, 118, 121, 124, 129, 131, 133, 134, 136, 141, 142, 143, 163, 167, 169, 171, 173, 174, 175, 176, 179, 180, 183, 186, 188, 194, 204, 240, 243, 244, 247, 248, 249, 253, 256, 257, 262, 263, 272, 273, 274, 276, 281, 282, 285, 287, 293, 294, 310, 312, 317, 318, 319, 337, 345, 346, 355, 356, 362, 363, 364, 365, 366, 367, 368, 369, 371, 372, 374, 377, 378, 381, 382, 391, 393, 394, 396, 397, 398, 399, 400, 402, 404, 405, 416, 419, 420, 421, 423, 424, 425, 426, 427, 428, 429, 431, 433, 434, 435, 436, 437, 445, 450, 456, 458, 462, 465, 469, 485, 486, 491, 492, 493, 494, 495, 496, 497, 498, 508, 510, 536, 537, 538, 544, 546, 547, 553, 554, 561, 562, 564, 575, 576, 581, 582, 583, 587, 589, 591

Appendix 7: Syngraph - A Recursive Descent Parser Generator, page 593

CarrayType (derived type of Ctype), 331, 333, 351, 352, 359
cbw (convert byte to word), 480
CenumType (derived type of Ctype), 339, 340, 342
Ceval, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 285, 287, 289, 290, 291, 292, 293, 294, 340, 349, 351, 393, 397, 399, 401, 402, 404, 405, 406, 407, 408, 409, 410
Cfunction (derived type of Ctype), 354, 355, 356
Character, 20, 22, 33, 72, 80, 81, 85, 89, 90, 92, 238, 239, 571
class, 73, 74, 90, 136, 236, 271, 281, 415, 419, 430, 431, 433, 436, 437, 440, 443, 446, 447, 449, 451
    base class, 76, 271, 275, 282, 331, 332, 337, 340, 410, 423, 429, 430, 431, 434, 435, 436, 437, 440, 445, 451, 455
    constructor, 73, 74, 136, 137, 141, 142, 211, 245, 248, 249, 255, 263, 271, 272, 273, 274, 275, 276, 280, 282, 337, 339, 399, 415, 416, 417, 420, 421, 422, 423, 424, 428, 433, 434, 435, 436, 437, 438, 439, 443, 444, 450, 452, 454
    data members, 74, 76, 81, 86, 88, 245, 248, 270, 282, 294, 332, 333, 344, 354, 355, 399, 409, 415, 416, 417, 422, 424, 430, 431, 432, 433, 434, 435, 440, 441, 443, 444, 445, 456
    derived class, 91, 132, 137, 236, 237, 238, 241, 244, 247, 331, 332, 333, 343, 351, 358, 402, 429, 430, 431, 432, 433, 434, 435, 436, 437, 440, 455, 456, 584
    destructor, 132, 142, 263, 337, 415, 416, 417, 423, 433, 434, 435, 436, 450, 455
    embedded object, 433, 434
    friend, 74, 132, 328, 339, 349, 440, 537
    inheritance, 310, 321, 328, 415, 429, 431, 440, 452
    instantiation, 73, 416, 423, 430, 437, 450, 455
    member function, 72, 73, 74, 82, 86, 90, 91, 92, 93, 133, 136, 137, 140, 169, 171, 175, 178, 236, 238, 240, 248, 249, 254, 263, 275, 281, 282, 319, 328, 337, 354, 357, 372, 415, 416, 417, 421, 423, 424, 426, 428, 429, 430, 431, 432, 433, 436, 437, 439, 440, 445, 450, 454, 455, 543, 544, 575, 587
    virtual function, 86, 91, 237, 273, 328, 333, 337, 415, 436, 437, 453, 456
    virtual functions, 331, 332, 337, 437, 455
classdefs, 112, 115, 216, 221, 228, 241, 242, 248, 249, 250, 252, 256, 257, 260
clause, 10, 41, 83, 101, 157, 166, 168, 269, 390, 404, 405, 407, 408, 409, 491, 495, 496, 588
cmp (compare instruction), 157, 289, 290, 291, 292, 297, 383, 384, 386, 387, 388, 389, 390, 391, 392, 394, 395, 397, 399, 403, 410, 411, 477, 478, 490, 491, 495, 496, 510, 590, 591
CodeView debugger (Microsoft), 499
comment, 11, 61, 62, 63, 72, 79, 80, 84, 85, 88, 114, 246, 257, 442, 536
comparison, 63, 131, 266, 278, 282, 289, 290, 291, 292, 295, 383, 386, 388, 390, 391, 392, 396, 397, 406, 477, 478, 490, 495, 500, 508, 510
conflict, 68, 70, 71, 80, 102, 186, 203, 207, 208, 221, 223, 224, 225, 226, 440, 573
connected, 13, 30, 141, 228, 269, 280, 353, 392, 445, 454, 568, 569, 573
constant folding, 265, 266, 282, 283, 284, 286, 289, 296, 302, 310, 316, 392, 505
context free, 36, 94, 101, 157, 213, 380
control character, 22, 77, 78, 79
control statement, 83, 380, 381, 382, 383, 385, 387, 390, 392, 398, 404, 412, 537
control structure, 380, 381, 386, 391, 392, 394, 398, 399, 403, 405, 411, 412, 536, 554
CrecField (derived type of Ctype), 349, 350, 353, 354
CrecType (derived type of Ctype), 349, 350, 353, 354
Csem, 74, 76, 236, 237, 238, 239, 240, 241, 244, 246, 247, 262, 263, 271, 281, 285, 293, 332, 333, 337, 584
CsetType (derived type of Ctype), 344
Csimple (derived type of Ctype), 331, 337, 339, 351, 354, 360
Cstack, 240, 402, 449, 450, 451, 584
CstackElement, 402
Csubrange (derived type of Ctype), 331, 339, 343, 359
Ctoken, 73, 74, 75, 76, 81, 82, 83, 84, 85, 86, 87, 88, 90, 91, 92, 93, 112, 115, 170, 216, 221, 237, 238, 241, 242, 248, 249, 251, 252, 256, 257, 260, 272, 273, 274, 276, 279, 281, 571, 584, 585, 587, 588
Ctree, 125, 271, 274, 275, 276, 281, 282, 310, 409, 410, 451, 452, 453, 454, 455, 456, 457
Ctype, 331, 332, 333, 334, 336, 337, 338, 339, 340, 341, 342, 343, 344, 347, 349, 350, 351, 352, 354, 355, 356, 357, 358
cwd (convert word to double word), 480, 483
data access protection, 6, 17, 74, 129, 132, 136, 137, 138, 236, 244, 254, 263, 273, 281, 289, 302, 332, 334, 336, 337, 339, 340, 341, 344, 347, 349, 351, 354, 356, 357, 415, 416, 417, 428, 429, 430, 431, 432, 433, 434, 436, 437, 438, 439, 440, 441, 444, 449, 452, 453, 454, 455, 456, 459, 485, 495, 534, 536, 540, 554, 558, 561
declaration, 10, 11, 105, 106, 128, 129, 130, 131, 137, 141, 171, 174, 176, 241, 254, 255, 312, 317, 320, 321, 322, 325, 326, 327, 328, 329, 330, 333, 341, 342, 343, 345, 347, 348, 350, 351, 352, 353, 354, 355, 358, 359, 364, 371, 375, 415, 418, 422, 423, 424, 426, 427, 428, 429, 440, 442, 443, 444, 445, 470, 471, 485, 492, 505, 571
defeatGC, 263
defeatGConce, 263, 264, 272, 410
definition, 12, 20, 40, 57, 71, 73, 81, 90, 105, 106, 120, 127, 132, 166, 171, 187, 195, 214, 231, 236, 238, 241, 244, 254, 273, 318, 321, 333, 339, 414, 415, 416, 417, 424, 427, 428, 429, 430, 440, 449, 459, 537, 547
delete, 69, 132, 134, 142, 143, 262, 263, 272, 274, 275, 277, 293, 294, 295, 334, 337, 346, 355, 405, 406, 418, 421, 423, 426, 433, 435, 438, 453, 454, 455, 534, 535, 539, 558, 584, 585, 587, 588
DeRemer, Frank, 55, 56, 211, 234

derivation, 35, 36, 37, 95, 96, 98, 99, 100, 101, 106, 107, 108, 109, 110, 113, 114, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 187, 198, 199, 200, 201, 218, 251, 271, 310, 401
    leftmost, 45, 81, 98, 99, 100, 101, 108, 109, 112, 118, 119, 163, 164, 172, 174, 194, 195, 199, 205, 218, 239, 240, 272, 274, 282, 290, 394, 452, 453, 454, 455, 456, 501, 536, 539, 566
    rightmost, 98, 99, 100, 101, 109, 110, 113, 114, 116, 118, 172, 195, 200, 272, 290, 454, 456, 566
    tree, 99, 100, 101, 108, 110, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 198, 271, 310, 401
disambiguating rule, 101, 225, 226
dynamic link, 362, 374, 375, 376
ed editor, 538, 540
emacs editor, 536, 540
empty string, 28, 36, 41, 47, 94, 320, 428
EOF, 73, 113, 187, 237, 239, 334, 403, 580
EOL, 237, 239, 334
evalControl function, 393, 394, 395, 398, 399, 404, 406, 412
evaluation, 117, 119, 120, 121, 122, 124, 167, 176, 244, 265, 269, 270, 282, 290, 309, 364, 366, 380, 386, 388, 392, 393, 403, 405, 411, 481, 495, 505, 537
ex editor, 23, 41, 95, 100, 102, 107, 118, 185, 190, 194, 310, 319, 359, 360, 423, 426, 444, 494, 540
expression, 191, 291, 293, 537, 561, 562, 563
factor, 6, 135, 161, 163, 164, 165, 166, 167, 168, 169, 171, 172, 174, 175, 177, 178, 179, 180, 181, 191, 193, 298, 501, 586, 587, 588, 589
file transfer protocol (FTP), 22, 553
finite state machine, 8, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40, 41, 42, 46, 47, 48, 52, 53, 54, 55, 58, 62, 63, 64, 65, 66, 67, 70, 71, 72, 74, 75, 76, 77, 82, 83, 84, 85, 88, 90, 92, 93, 126, 159, 166, 167, 168, 197, 203, 583
first set, 102, 184, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 223, 566, 573, 574, 575, 576, 577, 580


floating point, 12, 13, 15, 16, 28, 29, 59, 60, 76, 81, 113, 167, 176, 237, 238, 248, 253, 267, 298, 299, 300, 301, 302, 308, 312, 313, 314, 315, 316, 318, 319, 320, 322, 323, 324, 333, 334, 366, 377, 378, 396, 416, 459, 460, 494, 496, 500, 502, 503, 504, 505, 506, 507, 508, 509, 510, 571
foldConst function, 276, 277, 278, 279, 281, 282, 283, 287, 289, 290, 293, 294, 295, 298, 397
formal parameter, 354, 355, 356, 358, 362, 363, 364, 365, 366, 368, 370, 374, 378, 424, 425, 426, 427, 444, 445, 446, 493
free-form language, 61, 72
FSM
    deterministic, 31, 38, 39, 40, 45, 54, 55, 62, 67, 68, 102, 106, 166, 223
    distinguishable states, 46, 47, 48, 49, 50, 51, 71
    empty token, 37, 67
    empty transition, 31, 32, 36, 38, 40, 41, 42, 53, 54
    empty transition cycle, 40
    equivalent state, 40, 46, 48, 50, 71
    equivalent states, 36, 46, 55, 71, 286, 329, 343
    merge, 40, 41, 42, 44, 48, 68, 70
    multiple transition, 38, 39, 43, 44, 54, 68
    non deterministic FSM, 39, 40, 43, 44, 46, 54, 55
    nondeterministic, 34, 36, 37, 39, 54, 68
    partitions, 48, 50, 51
    state, 7, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 57, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 83, 84, 85, 86, 87, 88, 94, 109, 113, 114, 158, 168, 197, 200, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 218, 219, 220, 221, 222, 223, 224, 225, 228, 230, 232, 233, 234, 236, 237, 243, 246, 247, 250, 258, 357, 376, 383, 384, 396, 422, 445, 458, 470, 480, 566, 570, 577
    transition, 30, 31, 32, 33, 34, 36, 37, 38, 40, 41, 44, 47, 48, 50, 52, 53, 54, 55, 65, 66, 67, 70, 72, 204, 207, 208, 209, 212, 213, 225, 573, 574, 576, 583, 584, 586
ftp, 22, 553
garbage collection, 263, 264, 274, 333
getChar, 38, 39, 73, 83, 84, 85, 86, 87
getIdent, 75, 80, 81, 86
getToken, 72, 74, 82, 83, 84, 85, 86, 90, 92, 170
Gill, A., 55
global data, 370, 463
global variable, 245, 334, 365, 374, 375, 376, 377, 379, 427, 444
Gnu, 5, 6, 58, 183, 414, 420, 468, 496, 499, 533, 534, 540, 541, 542, 545, 553
    g++ compiler, 420, 540, 541, 542, 547, 553
    gcc compiler, 58, 496, 499, 540, 541, 542, 546, 547, 553
    gdb debugger, 414, 420, 468, 542, 543, 544, 545
    xgdb debugger, 542
goal, 48, 94, 95, 96, 98, 99, 106, 107, 108, 110, 118, 171, 187, 188, 190, 191, 198, 205, 228, 254, 566, 570, 571, 572, 575, 577, 581, 582, 585, 586
grammar, 34, 35, 36, 37, 74, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 109, 111, 112, 113, 114, 115, 117, 118, 120, 121, 187, 191, 199, 202, 204, 209, 211, 213, 216, 218, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 235, 237, 238, 239, 240, 241, 242, 244, 246, 248, 249, 250, 252, 253, 256, 257, 258, 259, 263, 266, 271, 272, 274, 276, 279, 280, 282, 285, 290, 293, 295, 334, 335, 355, 380, 387, 405, 554, 557, 558, 559, 566, 568, 570, 571, 572, 573, 574, 577, 581, 582, 583, 584, 585, 586, 592
graph, 30, 32, 33, 39, 54, 70, 184, 185, 186, 363, 452, 455, 572, 573, 586
    directed, 158, 310, 555
    edge, 159, 160, 169, 184, 187, 188, 189, 190, 191, 192, 193, 194, 196, 258, 382, 566, 569, 573, 574, 576
    node, 99, 118, 123, 124, 125, 132, 133, 134, 158, 168, 184, 185, 186, 187, 188, 189, 190, 193, 194, 198, 199, 269, 270, 271, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 285, 287, 290, 293, 294, 295, 305, 306, 307, 310, 388, 389, 393, 394, 396, 397, 452, 453, 454, 455, 456, 457, 568, 573, 576, 580, 581
host language, 4, 20, 229, 230, 232
Huffman, 55
identifier, 9, 10, 26, 59, 60, 62, 68, 69, 70, 71, 75, 76, 78, 82, 86, 92, 105, 112, 114, 121, 125, 126, 127, 128, 129, 130, 131, 132, 135, 137, 142, 158, 159, 160, 161, 163, 164, 165, 166, 172, 174, 176, 179, 182, 187, 191, 200, 203, 204, 215, 219, 232, 237, 240, 242, 254, 255, 257, 258, 261, 270, 271, 272, 276, 279, 280, 286, 287, 312, 317, 329, 330, 331, 334, 335, 339, 350, 354, 383, 386, 405, 446, 584, 585, 587
inherited, 125, 271, 414, 434
Intel 8086, 11, 459, 461, 465, 466, 483, 485, 489, 490, 499, 500, 583, 590
    code segment, 14, 377, 411, 466, 468, 470, 471, 474, 482, 483, 485, 496
    data segment, 14, 376, 377, 379, 410, 467, 468, 470, 471, 481, 483, 484, 485
    registers, 14, 20, 22, 62, 167, 179, 183, 258, 265, 266, 267, 269, 270, 287, 299, 301, 304, 305, 306, 307, 308, 310, 313, 323, 356, 362, 365, 366, 367, 368, 369, 370, 374, 375, 376, 377, 379, 384, 386, 387, 391, 393, 407, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 485, 486, 487, 488, 489, 490, 491, 493, 494, 495, 496, 500, 501, 502, 504, 505, 506, 507, 508, 509, 510, 539, 545, 586
    segments, 228, 231, 463, 555
Intel corporation, 13, 183, 197, 265, 273, 282, 287, 288, 290, 298, 304, 308, 309, 310, 315, 316, 323, 348, 362, 459, 461, 477, 488, 499, 500, 501, 503, 504, 505, 506, 507, 509, 510, 545, 554, 555
Intel FPU
    fadd, 15, 300, 301, 303, 313, 500, 505, 508, 509
    fdiv, 300, 301, 302, 506, 508
    fild, 15, 267, 299, 300, 301, 302, 303, 313, 506, 507, 508, 509
    fld, 15, 267, 268, 299, 300, 301, 302, 303, 313, 378, 396, 397, 505, 507, 508, 509, 510
    floating point unit, 15, 16, 19, 20, 267, 298, 299, 300, 301, 302, 303, 304, 308, 310, 313, 316, 369, 377, 378, 396, 460, 500, 503, 504, 505, 506, 507, 508, 509, 510, 545, 555
    fmul, 267, 268, 300, 500, 506, 508
    fstp, 15, 16, 299, 300, 301, 302, 303, 377, 505, 506, 507, 508, 509
    fsub, 300, 301, 303, 505, 506, 508
je (jump on equal instruction), 157, 292, 388, 410, 489, 491
jg (jump on greater-than instruction), 292, 386, 389, 392, 396, 397, 411, 489, 495, 496, 497
jge (jump on greater-than-or-equal instruction), 290, 291, 292, 297, 397, 489, 490
jl (jump on less instruction), 387, 389, 391, 396, 411, 489, 490
jle (jump on less-than-or-equal instruction), 292, 383, 396, 489, 510
jmp (jump instruction), 157, 290, 291, 292, 297, 385, 386, 387, 388, 389, 400, 402, 403, 404, 406, 408, 411, 489, 490, 491, 495, 496, 497
jne (jump on not equal instruction), 292, 384, 385, 386, 390, 391, 392, 394, 395, 396, 399, 403, 489
Johnson, W. L., 5, 55
Kleene, S. C., 55
Knuth, Donald, 200, 202, 203, 208, 234, 309, 310
language
    LL, 102, 109, 183, 197
    LR, 102, 110, 111, 125, 198, 200, 202, 204, 208, 211, 212, 213, 214, 216, 218, 221, 225, 226, 228, 230, 231, 232, 235, 236, 246, 251, 258, 259, 270, 274, 275, 279, 280, 401, 554, 556, 557
lea (load address instruction), 14, 16, 17, 18, 345, 346, 365, 367, 369, 389, 411, 468, 469, 474, 475, 498, 591
left recursion, 100, 103, 104, 118
letter (ASCII), 3, 21, 25, 26, 28, 41, 60, 62, 78, 112, 126, 158, 159, 160, 238, 262, 318, 381, 470, 472, 500, 536, 539, 570, 571


lex, 57, 72, 74, 80, 81, 82, 84, 86, 88, 90, 91, 92, 93, 112, 115, 169, 170, 171, 172, 174, 175, 176, 177, 178, 180, 181, 182, 216, 221, 229, 238, 239, 241, 242, 248, 249, 250, 252, 256, 257, 260, 554, 558, 570, 571, 572, 584, 586
lexfile, 72, 74, 82, 88, 112, 115, 176, 216, 221, 228, 229, 238, 241, 242, 248, 249, 250, 252, 256, 257, 260, 556, 570, 571, 572
lexgen, 39, 67, 78, 90, 91, 168, 170, 174, 197, 229, 554, 566, 570, 583
lexical function, 71, 75, 76, 81, 82, 83, 84, 85, 86, 87, 92, 238, 239, 571, 576
lextbl, 74, 228, 229, 553, 554, 556, 557, 558, 559
line ending, 7, 21, 22, 57, 59, 61, 63, 64, 72, 76, 78, 79, 81, 417
literal, 12, 21, 22, 26, 59, 60, 62, 63, 66, 68, 70, 71, 76, 78, 80, 81, 82, 86, 88, 92, 136, 229, 259, 276, 282, 290, 301, 312, 314, 321, 322, 324, 335, 336, 337, 338, 344, 345, 390, 445, 482, 570, 571, 572, 587
literal token, 59, 60, 62, 63, 66, 68, 70, 71, 76, 80, 81, 86, 88, 92, 136, 229, 571, 572
logical operation, 19, 391, 462, 464
login, 553
LR parser, 106, 204, 208, 211, 213, 223, 224, 225, 226, 228, 231, 232, 234, 246, 251, 258, 259, 275, 279, 401, 556, 583
    halt, 30, 31, 32, 33, 34, 37, 40, 42, 43, 44, 46, 47, 48, 49, 50, 52, 53, 54, 55, 63, 64, 65, 66, 67, 68, 70, 71, 72, 77, 85, 86, 88, 210, 214, 221, 223
    lookahead, 93, 186, 187, 204, 211, 214, 215, 223, 224, 242
    reduce, 39, 40, 46, 47, 54, 68, 77, 202, 203, 208, 214, 215, 219, 220, 221, 225, 234, 235, 242, 243, 251, 264, 265, 266, 268, 280, 285, 295, 315, 384, 389, 401, 410
    shift, 203, 207, 208, 214, 218, 220, 221, 242, 466, 487, 488, 489, 504
LR Parser Generation
    completion, 205, 207
    goto rule, 207
    item, 203, 204, 205, 206, 208, 209, 210, 211, 221, 222, 251, 343, 370
    item set, 204, 205, 206, 208, 209, 211, 222
    mixed states, 208, 211, 226
LR state machine, 204, 216, 218, 221
lr1p, 228, 229, 230, 231, 232, 234, 240, 246, 553, 554, 556, 557, 558, 559, 583
macro, 8, 14, 58, 167, 397, 469, 473, 497, 508, 509, 545, 546, 547, 555, 562
make, 3, 5, 6, 9, 10, 22, 35, 36, 40, 42, 44, 52, 61, 62, 65, 66, 67, 72, 81, 82, 88, 90, 102, 108, 109, 110, 111, 113, 114, 116, 123, 131, 136, 167, 170, 172, 173, 174, 176, 179, 186, 189, 190, 191, 193, 194, 195, 207, 208, 212, 213, 225, 229, 231, 235, 239, 240, 244, 245, 247, 249, 252, 253, 254, 256, 262, 265, 270, 288, 293, 309, 317, 318, 325, 341, 357, 359, 367, 377, 380, 383, 392, 408, 414, 415, 423, 428, 429, 430, 436, 441, 456, 466, 468, 477, 494, 506, 534, 536, 537, 539, 540, 541, 542, 545, 546, 547, 553, 554, 556, 557, 558, 559, 560, 561, 562, 569, 570, 581, 582, 584, 585, 586, 587, 590
makefile, 82, 115, 216, 227, 253, 256, 476, 535, 537, 545, 547, 558, 559, 560, 561, 581, 583
Marut, Charles, 499
Masm assembler, 4, 13, 62, 472, 491, 499, 555
McCulloch, 55
metalanguage, 76
Microsoft, 4, 5, 7, 13, 57, 170, 309, 319, 446, 447, 448, 467, 485, 499, 537, 554, 555, 558, 561, 562, 564, 565
Moore, 55
mov instruction, 14, 15, 16, 17, 18, 179, 180, 182, 183, 258, 259, 260, 261, 262, 266, 267, 268, 269, 282, 284, 285, 286, 287, 289, 290, 292, 296, 297, 298, 301, 302, 303, 304, 367, 368, 369, 370, 375, 376, 377, 378, 386, 390, 391, 392, 397, 399, 403, 468, 469, 471, 472, 473, 474, 475, 476, 477, 480, 482, 483, 484, 485, 487, 489, 490, 493, 494, 495, 496, 497, 509, 510, 585, 587, 588, 590, 591
new, 5, 6, 12, 20, 22, 30, 35, 36, 40, 44, 46, 48, 53, 73, 74, 76, 82, 84, 95, 102, 104, 105, 106, 121, 122, 125, 127, 129, 133, 134, 136, 138, 139, 142, 159, 171, 190, 191, 196, 206, 207, 208, 211, 219, 228, 229, 231, 236, 237, 239, 241, 242, 245, 246, 247, 248, 249, 251, 252, 253, 255, 256, 257, 259, 262, 263, 264, 272, 273, 274, 276, 277, 278, 279, 280, 281, 282, 283, 287, 289, 290, 291, 293, 312, 320, 329, 331, 343, 350, 355, 359, 364, 367, 368, 376, 391, 394, 395, 398, 399, 402, 404, 405, 406, 407, 408, 410, 414, 415, 418, 421, 423, 424, 425, 427, 431, 434, 435, 439, 441, 442, 447, 453, 454, 455, 456, 457, 466, 474, 485, 488, 489, 505, 534, 540, 542, 553, 554, 556, 557, 558, 559, 560, 565, 581, 583, 585, 590, 592
NIL (null pointer in Pascal), 314, 357
nlr1, 102, 209, 216, 222, 225, 228, 229, 231, 246, 553, 554, 556, 557, 558, 559
nmake, 112, 116, 170, 256, 558, 559, 560, 562, 563
nonterminal, 35, 36, 37, 94, 95, 96, 98, 99, 100, 102, 106, 107, 108, 109, 110, 118, 122, 123, 184, 185, 188, 189, 190, 198, 202, 205, 206, 207, 208, 213, 214, 228, 240, 241, 243, 244, 245, 246, 248, 249, 251, 255, 259, 262, 271, 274, 279, 280, 290, 392, 398, 399, 400, 401, 403, 404, 405, 406, 408, 412, 566, 568, 569, 574, 576, 580, 586
normal mode (8086), 459, 465, 467, 475
number, 4, 5, 6, 8, 11, 13, 15, 16, 19, 20, 21, 23, 27, 28, 29, 33, 34, 46, 47, 54, 55, 58, 59, 60, 61, 62, 66, 67, 71, 78, 79, 81, 82, 84, 88, 89, 91, 92, 97, 102, 103, 109, 110, 113, 114, 125, 127, 128, 131, 135, 143, 158, 159, 160, 161, 163, 166, 167, 169, 171, 172, 173, 174, 175, 176, 177, 178, 179, 182, 185, 186, 187, 191, 193, 204, 205, 207, 208, 209, 211, 212, 214, 218, 225, 226, 232, 233, 234, 237, 238, 241, 242, 245, 247, 248, 266, 267, 268, 270, 271, 273, 274, 282, 289, 299, 300, 302, 305, 306, 308, 309, 310, 312, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 330, 333, 335, 339, 343, 344, 345, 346, 347, 351, 354, 355, 356, 357, 363, 368, 378, 380, 383, 384, 389, 390, 391, 393, 395, 397, 403, 405, 414, 415, 416, 417, 419, 420, 422, 426, 428, 435, 444, 448, 449, 454, 460, 462, 466, 470, 471, 472, 474, 477, 480, 481, 483, 486, 488, 489, 494, 496, 499, 500, 501, 502, 503, 504, 506, 507, 508, 509, 510, 536, 539, 540, 543, 544, 545, 546, 555, 557, 561, 571, 581

one-pass compiler, 284
operand, 29, 124, 166, 173, 174, 177, 180, 259, 267, 268, 269, 270, 290, 299, 300, 307, 309, 316, 325, 475, 477, 479, 486, 496, 497, 505, 506, 508, 589
operator, 11, 26, 27, 28, 29, 41, 53, 62, 63, 65, 77, 83, 96, 103, 104, 106, 116, 119, 120, 121, 123, 124, 166, 167, 173, 174, 175, 176, 179, 245, 253, 254, 258, 259, 263, 268, 269, 270, 271, 274, 275, 276, 278, 279, 280, 282, 288, 289, 290, 291, 294, 295, 304, 306, 308, 312, 313, 314, 315, 316, 325, 327, 328, 329, 359, 390, 393, 394, 395, 396, 417, 421, 424, 425, 434, 435, 440, 448, 484, 486, 500, 535, 586, 588, 589, 590
    binary, 96, 121, 166, 167, 179, 251, 259, 267, 284, 313, 440, 589
    infix, 123, 173, 174, 176, 177, 179, 197
opt, 228, 553, 554, 556, 557, 558, 559
overloaded name, 312, 433, 434, 439, 440, 443
pairwise disjoint, 184, 193, 194, 223
parenthesizing, 96, 123, 197, 275
parse, 36, 98, 101, 108, 125, 165, 166, 169, 171, 186, 187, 194, 196, 198, 200, 202, 203, 209, 218, 219, 235, 241, 242, 244, 266, 305, 401, 557, 573, 582, 584, 587, 588
parser, 7, 10, 21, 25, 33, 57, 58, 59, 60, 65, 73, 76, 82, 94, 98, 101, 102, 106, 108, 109, 110, 111, 112, 113, 115, 116, 125, 157, 160, 168, 169, 170, 176, 194, 195, 197, 198, 200, 204, 208, 209, 211, 213, 216, 218, 221, 223, 225, 226, 228, 229, 230, 231, 232, 233, 234, 235, 236, 238, 239, 240, 241, 242, 243, 244, 246, 247, 248, 250, 253, 256, 257, 258, 262, 263, 267, 270, 275, 279, 280, 293, 310, 335, 392, 398, 399, 400, 401, 412, 554, 556, 559, 566, 570, 571, 573, 574, 575, 576, 577, 580, 581, 582, 583, 584, 585, 587, 589, 592
parsing
    bottom-up, 125, 203, 213
    recursive descent, 109, 125, 160, 168, 197, 270, 554, 585
    syntax function, 575, 577
    top-down, 109, 110, 111, 125
Pascal, 3, 4, 9, 11, 12, 13, 19, 20, 21, 22, 25, 26, 33, 36, 61, 62, 63, 65, 72, 75, 79, 80, 81, 101, 102, 103, 104, 105, 106, 119, 126, 127, 128, 129, 131, 132, 142, 160, 169, 187, 195, 225, 228, 231, 232, 239, 268, 269, 277, 295, 298, 301, 302, 309, 310, 312, 313, 314, 316, 317, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 332, 333, 335, 343, 344, 345, 346, 347, 348, 351, 355, 356, 357, 358, 361, 362, 363, 364, 369, 370, 371, 372, 374, 375, 377, 378, 380, 381, 382, 383, 385, 386, 387, 388, 389, 390, 391, 392, 393, 399, 401, 403, 406, 409, 410, 411, 476, 479, 480, 487, 490, 491, 492, 493, 494, 497, 507, 508, 509, 554, 555, 557, 561, 562, 563, 564
    function, 4, 7, 10, 12, 13, 14, 15, 16, 18, 19, 30, 34, 38, 54, 57, 72, 73, 81, 83, 84, 85, 86, 87, 88, 90, 91, 92, 103, 104, 106, 109, 115, 124, 125, 126, 128, 129, 131, 133, 134, 140, 142, 158, 168, 169, 170, 171, 172, 173, 174, 175, 178, 179, 183, 195, 196, 197, 229, 232, 233, 236, 238, 240, 241, 246, 247, 253, 262, 270, 272, 273, 276, 281, 282, 285, 286, 287, 291, 292, 293, 294, 302, 305, 310, 313, 314, 315, 318, 319, 320, 321, 325, 327, 328, 330, 331, 333, 337, 345, 347, 354, 355, 356, 358, 359, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 374, 375, 376, 377, 378, 380, 382, 386, 390, 391, 392, 393, 396, 397, 399, 402, 403, 405, 407, 409, 410, 412, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 432, 433, 434, 436, 437, 439, 440, 441, 442, 444, 445, 446, 447, 450, 451, 454, 455, 456, 462, 465, 467, 469, 485, 486, 491, 492, 493, 494, 495, 496, 497, 498, 503, 506, 507, 508, 509, 510, 533, 537, 543, 544, 561, 575, 576, 580, 581, 582, 583, 587, 588, 589
    grammar, 106
    procedure, 12, 13, 14, 15, 103, 104, 105, 106, 126, 129, 183, 220, 242, 317, 354, 355, 356, 362, 363, 365, 371, 374, 375, 380, 381, 382, 393, 485, 495
    program, 3, 4, 5, 7, 8, 9, 10, 11, 13, 17, 18, 19, 20, 21, 22, 25, 30, 34, 38, 39, 54, 57, 58, 59, 61, 62, 66, 71, 72, 74, 76, 80, 82, 88, 90, 92, 94, 95, 102, 103, 105, 106, 108, 112, 125, 126, 128, 131, 134, 135, 136, 137, 139, 157, 160, 170, 172, 179, 183, 186, 187, 188, 194, 195, 196, 197, 229, 230, 231, 232, 236, 242, 243, 246, 253, 262, 270, 273, 282, 283, 296, 302, 310, 312, 314, 315, 317, 318, 320, 321, 322, 323, 324, 326, 330, 332, 333, 334, 339, 349, 351, 355, 357, 359, 363, 365, 367, 368, 370, 371, 372, 374, 375, 377, 381, 382, 383, 388, 417, 418, 419, 420, 421, 423, 424, 432, 433, 436, 437, 441, 446, 447, 448, 462, 464, 465, 466, 467, 470, 471, 472, 473, 476, 479, 481, 482, 484, 485, 486, 489, 491, 492, 495, 497, 503, 506, 510, 533, 535, 536, 537, 541, 542, 543, 544, 545, 547, 555, 557, 561, 562, 563, 564, 572, 573, 575, 576, 577, 581, 590
    readln, 355
    static link, 356, 362, 365, 366, 367, 368, 369, 375, 376, 378, 379
    writeln, 11, 12, 14, 16, 17, 18, 128, 230, 231, 232, 277, 282, 283, 355, 365, 396, 397, 492, 510
pass by reference, 106, 141, 365, 424, 426
pass by value, 424, 425, 426
path, 31, 32, 34, 37, 52, 53, 67, 121, 158, 160, 161, 163, 164, 166, 183, 184, 185, 186, 187, 191, 193, 194, 195, 228, 533, 534, 541, 553, 559, 560, 566, 568, 569, 570, 571, 572, 573, 574, 577, 581, 583, 584, 585, 586, 587, 588, 589
peekChar, 73
Pennello, Tom, 234
Pitts, 55
pop instruction, 15, 16, 17, 18, 142, 173, 176, 177, 178, 180, 182, 183, 214, 215, 219, 232, 233, 246, 258, 259, 260, 261, 262, 265, 266, 267, 268, 269, 284, 287, 288, 289, 291, 296, 299, 300, 304, 307, 356, 367, 368, 369, 375, 376, 378, 382, 383, 392, 419, 449, 450, 451, 476, 477, 482, 483, 484, 485, 486, 492, 495, 496, 505, 507, 557, 560, 584, 585, 587, 588, 589, 590, 591
postfix, 62, 116, 167, 173, 174, 175, 176, 197, 266, 300, 327, 586


precedence, 26, 27, 52, 77, 120, 121, 122, 123, 167, 173, 225, 226, 275, 393, 442, 481, 482, 587
    operator, 121, 275
prefix, 26, 110, 111, 198, 200, 201, 202, 209, 212, 213, 320, 401, 445, 454, 456, 457, 497
preprocessor, 7, 8, 9, 57, 58, 59, 428
primary, 500, 587, 588
printable character, 7, 57, 58, 76, 77, 79, 80
printing, 73, 176, 195, 223, 270, 282, 417, 420, 421, 456, 486, 545
production rule, 6, 7, 10, 20, 21, 34, 35, 36, 37, 39, 47, 48, 54, 65, 66, 67, 70, 71, 94, 95, 96, 97, 98, 99, 100, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 129, 158, 166, 167, 183, 189, 190, 191, 198, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 211, 212, 213, 215, 219, 222, 223, 225, 226, 228, 229, 233, 234, 235, 237, 238, 239, 240, 241, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 258, 259, 262, 263, 264, 266, 269, 271, 274, 275, 276, 279, 280, 285, 286, 287, 289, 293, 294, 310, 319, 322, 328, 330, 334, 335, 355, 380, 387, 392, 393, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 410, 412, 423, 482, 545, 546, 547, 554, 569, 586
    left member, 34, 35, 36, 94, 100, 107, 118, 134, 202, 205, 215, 234, 235, 244, 245, 246, 251, 262, 274, 280, 283, 286, 296, 360, 589
    right member, 34, 36, 94, 100, 106, 107, 109, 110, 118, 134, 203, 204, 205, 212, 215, 219, 228, 233, 235, 239, 244, 245, 246, 247, 248, 251, 255, 258, 262, 263, 275, 280, 284, 286, 296, 392, 585, 589
    semantic code, 241
    single production, 122, 251, 262, 274, 275, 279, 310
    tags, 40, 79, 115, 184, 185, 227, 233, 234, 236, 239, 240, 262, 271, 272, 274, 277, 278, 279, 285, 290, 314, 315, 317, 322, 333, 335, 496, 584
protected mode (8086), 327, 362, 459, 461, 463, 465, 466, 475, 479, 485, 495, 556

push instruction, 14, 15, 16, 17, 18, 127, 134, 137, 140, 142, 168, 169, 173, 176, 177, 178, 179, 180, 182, 183, 184, 197, 208, 212, 214, 215, 219, 220, 237, 255, 258, 259, 260, 261, 262, 265, 266, 267, 268, 269, 284, 287, 288, 289, 291, 296, 300, 302, 304, 307, 345, 362, 365, 366, 367, 369, 370, 374, 375, 376, 377, 378, 382, 391, 396, 399, 438, 446, 449, 450, 463, 476, 477, 481, 482, 483, 484, 485, 486, 492, 493, 494, 495, 496, 504, 505, 508, 509, 555, 585, 588, 589, 591
push-down stack, 134, 140, 168, 197, 208, 399, 446, 463, 504
putBack, 73, 86
Qparser, 101, 106, 107, 111, 125, 136, 204, 205, 207, 208, 209, 211, 212, 213, 216, 218, 221, 222, 223, 224, 225, 226, 227, 230, 232, 233, 235, 237, 238, 240, 241, 245, 246, 251, 254, 262, 271, 272, 274, 309, 375, 376, 405, 449, 451, 453, 456, 497, 507, 538, 545, 553, 554, 556, 558, 559, 560, 581
qpgen.mak, 558, 559, 561, 562, 563, 564
quad, 267, 317, 463, 470, 479, 483, 497, 506
recursion, 95, 96, 107, 244, 363, 370, 371, 374, 492, 493, 573
recursive, 95, 96, 97, 100, 103, 104, 109, 118, 119, 120, 121, 125, 133, 134, 160, 168, 195, 196, 197, 223, 225, 226, 244, 258, 267, 270, 285, 328, 350, 363, 371, 410, 426, 427, 457, 467, 494, 543, 554, 566, 573, 580, 581, 585
reference, 8, 58, 72, 92, 102, 127, 129, 131, 169, 171, 179, 190, 191, 233, 240, 246, 255, 302, 312, 317, 319, 320, 325, 327, 328, 329, 330, 331, 332, 334, 337, 364, 366, 370, 371, 374, 375, 376, 377, 406, 415, 421, 422, 423, 426, 427, 432, 433, 441, 446, 447, 451, 457, 466, 467, 472, 474, 507, 533, 544, 547, 581, 586
regular expression, 8, 25, 26, 27, 28, 29, 30, 39, 52, 53, 55, 62, 63, 64, 70, 71, 72, 73, 76, 77, 78, 79, 80, 81, 82, 88, 92, 94, 96, 126, 157, 158, 160, 166, 209, 538, 571
Regular expressions
    alternative, 26, 79, 188, 196, 229, 301, 302, 323, 389, 391, 422, 423, 434, 464, 496, 497, 501, 505, 541
    choice, 20, 22, 26, 27, 29, 96, 102, 109, 138, 139, 140, 158, 160, 166, 185, 186, 193, 194, 203, 310, 314, 335, 396, 412, 481, 494, 534, 573
    closure, 26, 53, 54
    concatenation, 26, 27, 29, 53, 54, 61, 77, 312, 314, 346, 448
reserved word, 9, 59, 60, 63, 88, 89, 126, 128, 237, 324, 327, 334, 335, 401, 568, 571, 572
ret (return instruction), 15, 17, 18, 298, 304, 356, 362, 368, 369, 378, 382, 383, 397, 411, 485, 492, 495, 496, 498, 509, 510, 591
return address, 169, 362, 363, 365, 366, 367, 368, 382, 491, 492, 493, 494, 495, 581
right recursion, 100, 118, 225
scanner, 7, 8, 9, 10, 21, 22, 30, 39, 57, 58, 59, 60, 61, 62, 63, 65, 67, 72, 73, 74, 76, 80, 81, 82, 90, 91, 92, 126, 136, 160, 161, 168, 169, 170, 171, 174, 197, 229, 230, 234, 238, 239, 246, 248, 253, 335, 337, 507, 554, 556, 566, 570, 571, 572, 583, 584, 587, 589, 592
scope, 128, 129, 130, 141, 142, 171, 317, 332, 353, 355, 357, 371, 374, 416, 439, 445, 510
semantics, 25, 75, 106, 114, 213, 226, 227, 228, 229, 232, 233, 235, 236, 238, 239, 240, 241, 242, 243, 246, 247, 248, 249, 250, 251, 254, 258, 259, 262, 263, 264, 270, 271, 280, 293, 294, 355, 382, 398, 400, 403, 404, 408, 554, 557, 564, 584, 585, 586, 587, 588, 589, 592
semType, 76, 81, 92, 229, 234, 236, 237, 238, 240, 247, 271, 272, 274, 277, 281, 293, 332, 333, 334, 335, 396, 399, 403
sentence, 3, 10, 21, 22, 29, 32, 34, 35, 36, 37, 64, 67, 94, 95, 96, 98, 99, 100, 101, 108, 109, 110, 112, 113, 114, 118, 119, 166, 167, 171, 186, 193, 198, 200, 202, 203, 209, 211, 213, 214, 218, 219, 220, 221, 229, 235, 237, 557, 571, 582
sentential form, 35, 36, 95, 96, 97, 98, 107, 108, 109, 110, 198, 200, 202, 203, 208, 212
Sethi, 234, 304, 310
side effect, 362, 363, 380, 397
source language, 4, 5, 7, 11, 13, 20, 21, 22, 57, 229, 230, 269, 362, 446
stack frame, 272, 356, 365, 366, 367, 368, 369, 370, 374, 375, 376, 377, 378

statement, 13, 14, 21, 38, 39, 41, 84, 95, 102, 103, 105, 106, 114, 124, 125, 127, 128, 129, 130, 157, 195, 226, 229, 231, 233, 238, 246, 247, 253, 254, 255, 257, 265, 266, 268, 270, 276, 282, 283, 284, 285, 286, 287, 296, 298, 302, 304, 308, 310, 317, 318, 319, 320, 327, 335, 347, 357, 359, 360, 362, 363, 364, 368, 369, 380, 381, 382, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 425, 444, 448, 490, 491, 492, 556, 557, 576, 580, 585, 588
string, 7, 9, 14, 15, 21, 22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 38, 46, 47, 52, 57, 58, 59, 60, 61, 63, 66, 70, 73, 74, 75, 76, 79, 80, 81, 82, 88, 89, 91, 92, 93, 95, 96, 100, 106, 107, 108, 110, 111, 113, 115, 118, 127, 129, 131, 132, 135, 137, 138, 140, 141, 160, 164, 174, 175, 184, 185, 187, 198, 202, 212, 228, 231, 232, 234, 237, 238, 239, 242, 243, 255, 256, 257, 258, 260, 261, 273, 275, 277, 279, 281, 282, 312, 314, 317, 319, 320, 322, 324, 330, 333, 334, 335, 336, 337, 338, 340, 341, 345, 346, 347, 349, 350, 359, 383, 396, 402, 404, 406, 408, 419, 428, 429, 438, 439, 441, 442, 445, 446, 447, 448, 449, 450, 453, 454, 456, 457, 470, 473, 494, 503, 506, 537, 538, 544, 547, 571, 573, 580, 583, 584, 585, 587, 592
strings
    quoted, 232, 240, 539
Stroustrup, Bjarne, 143, 414, 415, 443, 458
subtract instruction, 16, 48, 103, 157, 180, 183, 260, 261, 269, 284, 288, 290, 291, 296, 308, 309, 367, 368, 369, 377, 378, 389, 411, 448, 468, 469, 477, 478, 479, 482, 483, 484, 485, 487, 493, 494, 495, 496, 590, 591
Sun workstations, 502, 533, 534, 547
Symantec, 555
symbol table, 8, 9, 15, 19, 58, 59, 127, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 140, 141, 142, 143, 176, 253, 254, 255, 256, 260, 270, 276, 312, 318, 320, 330, 331, 332, 333, 338, 340, 343, 352, 353, 354, 355, 357, 358, 359, 360, 387, 446, 542, 585, 587
    hashing, 135


syncomp (a syngraph compiler), 557, 561, 583, 584, 585, 586, 590
syngraph, 184, 335, 554, 556, 557, 566, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 581, 583, 584, 585, 588, 592
syntax diagram, 10, 20, 102, 106, 129, 157, 158, 159, 160, 161, 164, 165, 168, 169, 171, 173, 183, 184, 186, 187, 188, 189, 190, 191, 193, 194, 195, 196, 197, 258, 554, 556, 557, 566, 569, 572, 573, 574, 575, 576, 577, 580, 581, 582, 585, 586, 587, 589, 592
    language, 65
synthesized, 270
tab, 9, 20, 22, 25, 61, 63, 65, 72, 78, 79, 84, 85, 537, 546, 562, 563, 571
target language, 4, 13, 73, 74
telnet, 540, 542
term, 161, 163, 164, 165, 166, 167, 168, 169, 171, 172, 174, 175, 177, 178, 179, 180, 181, 191, 192, 193, 194, 195, 230, 586, 587, 589
terminal, 27, 29, 35, 36, 40, 41, 82, 87, 96, 99, 106, 107, 108, 110, 118, 168, 184, 185, 198, 206, 207, 214, 234, 240, 262, 539, 568, 570, 573, 576, 592
terminator, 22, 97, 407
text file, 7, 8, 9, 22, 23, 57, 58, 59, 72, 73, 77, 80, 112, 216, 246, 347, 535, 536, 537, 539, 540, 542, 582
token, 7, 30, 35, 37, 39, 40, 41, 47, 57, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 96, 97, 102, 105, 106, 107, 108, 109, 110, 140, 158, 160, 161, 163, 164, 165, 166, 168, 169, 170, 171, 172, 174, 175, 178, 184, 185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 203, 208, 211, 212, 213, 214, 215, 218, 219, 220, 221, 222, 223, 224, 225, 231, 232, 233, 234, 236, 237, 238, 239, 243, 252, 254, 259, 270, 271, 274, 317, 334, 335, 401, 568, 569, 570, 571, 572, 573, 574, 575, 576, 580, 581, 582, 583, 584, 586
    lexical, 59, 60, 66, 68, 70, 76, 80, 81, 90, 92, 112, 229, 238, 239, 243, 271, 570, 571, 572, 584
token code, 60, 66, 72, 74, 76, 81, 83, 86, 87, 91, 92, 170, 234, 575, 576, 580
tounix, 23
transitive completion, 189, 190
translator, 4, 5, 7, 8, 21, 30, 31, 38, 54, 57, 58, 60, 173, 197, 226, 227, 229, 230, 243, 246, 414, 554, 556, 557, 558, 560, 566, 584, 585, 590
type, 10, 11, 12, 23, 25, 78, 79, 82, 87, 105, 106, 113, 115, 126, 127, 131, 137, 140, 183, 204, 236, 238, 240, 242, 244, 247, 248, 251, 257, 267, 270, 272, 273, 274, 293, 300, 302, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 336, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 363, 371, 376, 377, 387, 393, 396, 399, 418, 420, 421, 422, 423, 426, 427, 428, 429, 433, 434, 440, 441, 442, 443, 446, 447, 450, 471, 480, 486, 494, 497, 498, 502, 503, 533, 535, 539, 540, 543, 544, 545, 557, 582, 587
    array, 8, 19, 58, 231, 234, 312, 316, 319, 320, 331, 348, 433, 447, 544
    basic, 321, 323, 324, 330, 332
    enumerated, 76, 82, 129, 233, 236, 238, 240, 271, 272, 321, 325, 326, 330, 333, 334, 335, 337, 338, 339, 340, 342, 348, 360, 381, 387, 396, 589
    pointer, 7, 57, 76, 82, 84, 86, 87, 91, 104, 125, 132, 133, 134, 136, 141, 142, 143, 171, 213, 236, 237, 238, 240, 244, 245, 247, 248, 251, 252, 262, 270, 272, 273, 274, 275, 276, 279, 280, 282, 293, 294, 295, 314, 315, 320, 321, 327, 328, 329, 330, 331, 332, 333, 334, 337, 338, 339, 340, 341, 342, 344, 345, 347, 349, 350, 351, 352, 354, 355, 357, 358, 360, 366, 367, 368, 369, 374, 376, 382, 393, 394, 397, 401, 402, 403, 405, 409, 418, 419, 421, 425, 426, 427, 428, 431, 434, 435, 436, 437, 441, 442, 443, 445, 446, 447, 448, 452, 453, 454, 455, 456, 462, 464, 467, 476, 544, 584
    record, 104, 320, 321, 326, 327, 328, 329, 330, 333, 347, 348, 349, 350, 351, 353, 354, 359, 369, 393, 457, 544
    subrange, 312, 321, 324, 325, 326, 330, 331, 333, 339, 343, 345, 351, 360, 381, 389
type conversion, 82, 302, 313, 315
Ullman, Jeffrey, 55, 203, 234, 304, 310, 361
Unix tools, 22, 540
uplevel reference, 14, 370
useless operations, 284, 310, 398
validity, 85, 193, 197, 349, 354, 481
var, 230, 231, 302, 317, 320, 324, 326, 327, 330, 339, 358, 359, 365, 371, 380, 382, 406, 492
variable, 9, 11, 12, 13, 14, 15, 16, 19, 59, 66, 91, 104, 106, 129, 131, 176, 231, 246, 247, 265, 267, 269, 271, 272, 273, 274, 280, 282, 283, 289, 294, 298, 300, 301, 302, 309, 312, 313, 314, 317, 318, 319, 320, 324, 325, 326, 327, 328, 330, 332, 333, 334, 337, 340, 346, 348, 349, 353, 355, 356, 357, 358, 359, 360, 362, 363, 364, 365, 366, 367, 368, 369, 371, 374, 375, 376, 377, 378, 380, 381, 383, 384, 386, 390, 393, 405, 406, 421, 422, 424, 426, 427, 428, 441, 442, 444, 445, 446, 447, 454, 462, 463, 469, 470, 481, 482, 483, 484, 485, 492, 495, 505, 509, 537, 544, 545, 547, 553, 560, 581, 589, 592
vi editor, 23, 138, 139, 140, 141, 535, 536, 538, 539, 540, 543, 547
Visual Studio (Microsoft), 309, 447, 467, 468, 499, 554, 555, 556, 557, 558, 559, 560, 561
white space, 7, 9, 21, 57, 59, 62, 63, 65, 68, 69, 72, 76, 79, 81, 82, 84, 85, 97, 169, 170, 327, 533
yacc, 93, 102, 225, 234, 554
Yu, Ytha, 499
