Data Structures (Into Java)

(Fourth Edition)
Paul N. Hilfinger
University of California, Berkeley

Acknowledgments. Thanks to the following individuals for finding many of the
errors in earlier editions: Dan Bonachea, Michael Clancy, Dennis Hall, Zhi Lin,
Amy Mok, Barath Raghavan, Yingssu Tsai, and Emily Watt.

Copyright © 2000, 2001, 2002, 2004, 2005, 2006 by Paul N. Hilfinger. All rights
reserved.
Contents
1 Algorithmic Complexity 7
1.1 Asymptotic complexity analysis and order notation . . . . . . . . . . 9
1.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1 Demonstrating Big-Ohness . . . . . . . . . . . . . . . . . . 13
1.3 Applications to Algorithm Analysis . . . . . . . . . . . . . . . . . . . 13
1.3.1 Linear search . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.2 Quadratic example . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3.3 Explosive example . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3.4 Divide and conquer . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.5 Divide and fight to a standstill . . . . . . . . . . . . . . . . 17
1.4 Amortization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5 Complexity of Problems . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6 A Note on Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2 Data Types in the Abstract 23
2.1 Iterators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.1 The Iterator Interface . . . . . . . . . . . . . . . . . . . . . . 24
2.1.2 The ListIterator Interface . . . . . . . . . . . . . . . . . . . . 26
2.2 The Java Collection Abstractions . . . . . . . . . . . . . . . . . . . . 26
2.2.1 The Collection Interface . . . . . . . . . . . . . . . . . . . . . 26
2.2.2 The Set Interface . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.3 The List Interface . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.4 Ordered Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3 The Java Map Abstractions . . . . . . . . . . . . . . . . . . . . . . . 39
2.3.1 The Map Interface . . . . . . . . . . . . . . . . . . . . . . . . 41
2.3.2 The SortedMap Interface . . . . . . . . . . . . . . . . . . . . 41
2.4 An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.5 Managing Partial Implementations: Design Options . . . . . . . . . 46
3 Meeting a Specification 49
3.1 Doing it from Scratch . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2 The AbstractCollection Class . . . . . . . . . . . . . . . . . . . . . . 52
3.3 Implementing the List Interface . . . . . . . . . . . . . . . . . . . . . 53
3.3.1 The AbstractList Class . . . . . . . . . . . . . . . . . . . . . 53
3.3.2 The AbstractSequentialList Class . . . . . . . . . . . . . . . . 56
3.4 The AbstractMap Class . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.5 Performance Predictions . . . . . . . . . . . . . . . . . . . . . . . . . 60
4 Sequences and Their Implementations 65
4.1 Array Representation of the List Interface . . . . . . . . . . . . . . . 65
4.2 Linking in Sequential Structures . . . . . . . . . . . . . . . . . . . . 69
4.2.1 Singly Linked Lists . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2.2 Sentinels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2.3 Doubly Linked Lists . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Linked Implementation of the List Interface . . . . . . . . . . . . . . 72
4.4 Specialized Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4.1 Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.4.2 FIFO and Double-Ended Queues . . . . . . . . . . . . . . . . 81
4.5 Stack, Queue, and Deque Implementation . . . . . . . . . . . . . . . 81
5 Trees 91
5.1 Expression trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Basic tree primitives . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.3 Representing trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.3.1 Root-down pointer-based binary trees . . . . . . . . . . . . . 96
5.3.2 Root-down pointer-based ordered trees . . . . . . . . . . . . . 96
5.3.3 Leaf-up representation . . . . . . . . . . . . . . . . . . . . . . 97
5.3.4 Array representations of complete trees . . . . . . . . . . . . 98
5.3.5 Alternative representations of empty trees . . . . . . . . . . . 99
5.4 Tree traversals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.4.1 Generalized visitation . . . . . . . . . . . . . . . . . . . . . . 101
5.4.2 Visiting empty trees . . . . . . . . . . . . . . . . . . . . . . . 103
5.4.3 Iterators on trees . . . . . . . . . . . . . . . . . . . . . . . . . 104
6 Search Trees 107
6.1 Operations on a BST . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.1.1 Searching a BST . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.1.2 Inserting into a BST . . . . . . . . . . . . . . . . . . . . . . . 109
6.1.3 Deleting items from a BST. . . . . . . . . . . . . . . . . . . . 111
6.1.4 Operations with parent pointers . . . . . . . . . . . . . . . . 113
6.1.5 Degeneracy strikes . . . . . . . . . . . . . . . . . . . . . . . . 113
6.2 Implementing the SortedSet interface . . . . . . . . . . . . . . . . . . 113
6.3 Orthogonal Range Queries . . . . . . . . . . . . . . . . . . . . . . . . 115
6.4 Priority queues and heaps . . . . . . . . . . . . . . . . . . . . . . . . 119
6.5 Heapify Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
7 Hashing 129
7.1 Chaining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.2 Open-address hashing . . . . . . . . . . . . . . . . . . . . . . . . . . 130
7.3 The hash function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8 Sorting and Selecting 137
8.1 Basic concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.2 A Little Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.3 Insertion sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Shell's sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.5 Distribution counting . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.6 Selection sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.7 Exchange sorting: Quicksort . . . . . . . . . . . . . . . . . . . . . . . 147
8.8 Merge sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.8.1 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.9 Speed of comparison-based sorting . . . . . . . . . . . . . . . . . . . 151
8.10 Radix sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.10.1 LSD-first radix sorting . . . . . . . . . . . . . . . . . . . . 155
8.10.2 MSD-first radix sorting . . . . . . . . . . . . . . . . . . . 155
8.11 Using the library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.12 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
9 Balanced Searching 161
9.1 Balance by Construction: B-Trees . . . . . . . . . . . . . . . . . . . 161
9.1.1 B-tree Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.1.2 B-tree deletion . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2 Tries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2.1 Tries: basic properties and algorithms . . . . . . . . . . . . . 168
9.2.2 Tries: Representation . . . . . . . . . . . . . . . . . . . . . . 173
9.2.3 Table compression . . . . . . . . . . . . . . . . . . . . . . . . 174
9.3 Restoring Balance by Rotation . . . . . . . . . . . . . . . . . . . . . 175
9.3.1 AVL Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
9.4 Skip Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10 Concurrency and Synchronization 185
10.1 Synchronized Data Structures . . . . . . . . . . . . . . . . . . . . . . 186
10.2 Monitors and Orderly Communication . . . . . . . . . . . . . . . . . 187
10.3 Message Passing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
11 Pseudo-Random Sequences 191
11.1 Linear congruential generators . . . . . . . . . . . . . . . . . . . . . 191
11.2 Additive Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
11.3 Other distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
11.3.1 Changing the range . . . . . . . . . . . . . . . . . . . . . . . 194
11.3.2 Non-uniform distributions . . . . . . . . . . . . . . . . . . . . 195
11.3.3 Finite distributions . . . . . . . . . . . . . . . . . . . . . . . . 196
11.4 Random permutations and combinations . . . . . . . . . . . . . . . . 199
12 Graphs 201
12.1 A Programmer's Specification . . . . . . . . . . . . . . . . . . . . 202
12.2 Representing graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
12.2.1 Adjacency Lists . . . . . . . . . . . . . . . . . . . . . . . . . . 203
12.2.2 Edge sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
12.2.3 Adjacency matrices . . . . . . . . . . . . . . . . . . . . . . . . 209
12.3 Graph Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
12.3.1 Marking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
12.3.2 A general traversal schema. . . . . . . . . . . . . . . . . . . . 211
12.3.3 Generic depth-first and breadth-first traversal . . . . . . . 212
12.3.4 Topological sorting. . . . . . . . . . . . . . . . . . . . . . . . 212
12.3.5 Minimum spanning trees . . . . . . . . . . . . . . . . . . . . . 213
12.3.6 Shortest paths . . . . . . . . . . . . . . . . . . . . . . . . . . 216
12.3.7 Kruskal's algorithm for MST . . . . . . . . . . . . . . . . 218
Chapter 1
Algorithmic Complexity
The obvious way to answer the question "How fast does such-and-such a program
run?" is to use something like the UNIX time command to find out directly. There
are various possible objections to this easy answer. The time required by a program
is a function of the input, so presumably we have to time several instances of the
command and extrapolate the result. Some programs, however, behave fine for most
inputs, but sometimes take a very long time; how do we report (indeed, how can we
be sure to notice) such anomalies? What do we do about all the inputs for which we
have no measurements? How do we validly apply results gathered on one machine
to another machine?

The trouble with measuring raw time is that the information is precise, but
limited: the time for this input on this configuration of this machine. On a different
machine whose instructions take different absolute or relative times, the numbers
don't necessarily apply. Indeed, suppose we compare two different programs for
doing the same thing on the same inputs and the same machine. Program A may
turn out faster than program B. This does not imply, however, that program A will
be faster than B when they are run on some other input, or on the same input, but
some other machine.
In mathematese, we might say that a raw time is the value of a function
C_r(I, P, M) for some particular input I, some program P, and some platform
M ("platform" here is a catchall term for a combination of machine, operating
system, compiler, and runtime library support). I've invented the function C_r here
to mean "the raw cost of. . . ." We can make the figure a little more informative by
summarizing over all inputs of a particular size:

    C_w(N, P, M) = max_{|I|=N} C_r(I, P, M),

where |I| denotes the size of input I. How one defines the size depends on the
problem: if I is an array to be sorted, for example, |I| might denote I.length. We
say that C_w measures the worst-case time of a program. Of course, since the number
of inputs of a given size could be very large (the number of arrays of 5 ints, for
example, is 2^160 > 10^48), we can't directly measure C_w, but we can perhaps estimate
it with the help of some analysis of P. By knowing worst-case times, we can make
conservative statements about the running time of a program: if the worst-case
time for input of size N is T, then we are guaranteed that P will consume no more
than time T for any input of size N.
But of course, it is always possible that our program will work fine on most inputs,
but take a really long time on one or two (unlikely) inputs. In such cases, we might
claim that C_w is too harsh a summary measure, and we should really look at an
average time. Assuming all values of the input, I, are equally likely, the average
time is

    C_a(N, P, M) = ( Σ_{|I|=N} C_r(I, P, M) ) / N

Fair this may be, but it is usually very hard to compute. In this course, there-
fore, I will say very little about average cases, leaving that to your next course on
algorithms.
We've summarized over inputs by considering worst-case times; now let's consider
how we can summarize over machines. Just as summarizing over inputs required
that we give up some information (namely, performance on particular inputs), so
summarizing over machines requires that we give up information on precise
performance on particular machines. Suppose that two different models of
computer are running (different translations of) the same program, performing the
same steps in the same order. Although they run at different speeds, and possibly
execute different numbers of instructions, the speeds at which they perform any
particular step tend to differ by some constant factor. By taking the largest and
smallest of these constant factors, we can put bounds around the difference in their
overall execution times. (The argument is not really this simple, but for our
purposes here, it will suffice.) That is, the timings of the same program on any two
platforms will tend to differ by no more than some constant factor over all possible
inputs. If we can nail down the timing of a program on one platform, we can use it
for all others, and our results will only be off by a constant factor.
But of course, 1000 is a constant factor, and you would not normally be
insensitive to the fact that Brand X program is 1000 times slower than Brand Y.
There is, however, an important case in which this sort of characterization is
useful: namely, when we are trying to determine or compare the performance of
algorithms: idealized procedures for performing some task. The distinction between
algorithm and program (a concrete, executable procedure) is somewhat vague. Most
higher-level programming languages allow one to write programs that look very
much like the algorithms they are supposed to implement. The distinction lies in
the level of detail. A procedure that is cast in terms of operations on sets, with
no specific implementation given for these sets, probably qualifies as an algorithm.
When talking about idealized procedures, it doesn't make a great deal of sense to
talk about the number of seconds they take to execute. Rather, we are interested
in what I might call the "shape" of an algorithm's behavior: such questions as "If we
double the size of the input, what happens to the execution time?" Given that kind
of question, the particular units of time (or space) used to measure the performance
of an algorithm are unimportant: constant factors don't matter.

If we only care about characterizing the speed of an algorithm to within a
constant factor, other simplifications are possible. We need no longer worry about
the timing of each little statement in the algorithm, but can measure time using
any convenient marker step. For example, to do decimal multiplication in the
standard way, you multiply each digit of the multiplicand by each digit of the
multiplier, and perform roughly one one-digit addition with carry for each of these
one-digit multiplications. Counting just the one-digit multiplications, therefore, will
give you the time within a constant factor, and these multiplications are very easy
to count (the product of the numbers of digits in the operands).
Another characteristic assumption in the study of algorithmic complexity (i.e.,
the time or memory consumption of an algorithm) is that we are interested in typical
behavior of an idealized program over the entire set of possible inputs. Idealized
programs, of course, being ideal, can operate on inputs of any possible size, and most
possible sizes in the ideal world of mathematics are extremely large. Therefore, in
this kind of analysis, it is traditional not to be interested in the fact that a particular
algorithm does very well for small inputs, but rather to consider its behavior "in
the limit" as input gets very large. For example, suppose that one wanted to
analyze algorithms for computing π to any given number of decimal places. I can
make any algorithm look good for inputs up to, say, 1,000,000 by simply storing
the first 1,000,000 digits of π in an array and using that to supply the answer
when 1,000,000 or fewer digits are requested. If you paid any attention to how my
program performed for inputs up to 1,000,000, you could be seriously misled as to
the cleverness of my algorithm. Therefore, when studying algorithms, we look at
their asymptotic behavior: how they behave as the input size goes to infinity.
The result of all these considerations is that in considering the time complexity
of algorithms, we may choose any particular machine and count any convenient
marker step, and we try to find characterizations that are true asymptotically, out
to infinity. This implies that our typical complexity measure for algorithms will
have the form C_w(N, A), meaning the worst-case time over all inputs of size N
of algorithm A (in some units). Since the algorithm will be understood in any
particular discussion, we will usually just write C_w(N) or something similar. So the
first thing we need to describe algorithmic complexity is a way to characterize the
asymptotic behavior of functions.
1.1 Asymptotic complexity analysis and order notation
As it happens, there is a convenient notational tool, known collectively as order
notation for "order of growth," for describing the asymptotic behavior of functions.
It may be (and is) used for any kind of integer- or real-valued function, not just
complexity functions. You've probably seen it used in calculus courses, for example.
We write

    f(n) ∈ O(g(n))

(aloud, this is "f(n) is in big-Oh of g(n)") to mean that the function f is eventually
bounded by some multiple of |g(n)|. More precisely, f(n) ∈ O(g(n)) iff

    |f(n)| ≤ K·|g(n)|, for all n > M,

for some constants K > 0 and M. That is, O(g(n)) is the set of functions that
"grow no more quickly than |g(n)| does" as n gets sufficiently large. Somewhat
confusingly, f(n) here does not mean "the result of applying f to n," as it usually
does. Rather, it is to be interpreted as the body of a function whose parameter is n.
Thus, we often write things like O(n²) to mean "the set of all functions that grow
no more quickly than the square of their argument."¹ Figure 1.1a gives an intuitive
idea of what it means to be in O(g(n)).

¹If we wanted to be formally correct, we'd use lambda notation to represent functions (such as
Scheme uses) and write instead O(λn. n²), but I'm sure you can see how such a degree of rigor
would become tedious very soon.

[Figure 1.1: two graphs, omitted here. Graph (a) plots |f(n)|, |h(n)|, and the bounds
0.5|g(n)| and 2|g(n)| past n = M; graph (b) plots |g′(n)| and |h′(n)|.]

Figure 1.1: Illustration of big-Oh notation. In graph (a), we see that |f(n)| ≤ 2|g(n)|
for n > M, so that f(n) ∈ O(g(n)) (with K = 2). Likewise, h(n) ∈ O(g(n)),
illustrating that g can be a very over-cautious bound. The function f is also bounded
below by both g (with, for example, K = 0.5 and M any value larger than 0) and by
h. That is, f(n) ∈ Ω(g(n)) and f(n) ∈ Ω(h(n)). Because f is bounded above and
below by multiples of g, we say f(n) ∈ Θ(g(n)). On the other hand, h(n) ∉ Ω(g(n)).
In fact, assuming that g continues to grow as shown and h to shrink, h(n) ∈ o(g(n)).
Graph (b) shows that o(·) is not simply the set complement of Ω(·); h′(n) ∉ Ω(g′(n)),
but h′(n) ∉ o(g′(n)), either.
Saying that f(n) ∈ O(g(n)) gives us only an upper bound on the behavior of f.
For example, the function h in Figure 1.1a (and for that matter, the function that
is 0 everywhere) are both in O(g(n)), but certainly don't grow like g. Accordingly,
we define f(n) ∈ Ω(g(n)) iff

    |f(n)| ≥ K·|g(n)|, for all n > M,

for some constants K > 0 and M. That is, Ω(g(n)) is the set of all functions that
"grow at least as fast as" g beyond some point. A little algebra suffices to show the
relationship between O(·) and Ω(·):

    |f(n)| ≥ K·|g(n)|  ⟺  |g(n)| ≤ (1/K)·|f(n)|

so

    f(n) ∈ Ω(g(n))  ⟺  g(n) ∈ O(f(n))

Because of our cavalier treatment of constant factors, it is possible for a function
f(n) to be bounded both above and below by another function g(n): f(n) ∈ O(g(n))
and f(n) ∈ Ω(g(n)). For brevity, we write f(n) ∈ Θ(g(n)), so that Θ(g(n)) =
O(g(n)) ∩ Ω(g(n)).

Just because we know that f(n) ∈ O(g(n)), we don't necessarily know that
f(n) gets much smaller than g(n), or even (as illustrated in Figure 1.1a) that it
is ever smaller than g(n). We occasionally do want to say something like "h(n)
becomes negligible compared to g(n)." You sometimes see the notation h(n) ≪ g(n),
meaning "h(n) is much smaller than g(n)," but this could apply to a situation where
h(n) = 0.001·g(n). Not being interested in mere constant factors like this, we need
something stronger. A traditional notation is "little-oh," defined as follows.

    h(n) ∈ o(g(n))  ⟺  lim_{n→∞} h(n)/g(n) = 0.

It's easy to see that if h(n) ∈ o(g(n)), then h(n) ∉ Ω(g(n)); no constant K can
work in the definition of Ω(·). It is not the case, however, that all functions that
are outside of Ω(g(n)) must be in o(g(n)), as illustrated in Figure 1.1b.
1.2 Examples
You may have seen the big-Oh notation already in calculus courses. For example,
Taylor's theorem tells us² that (under appropriate conditions)

    f(x) = (x^n/n!)·f^[k=n](y)  +  Σ_{0≤k<n} f^[k](0)·x^k/k!
           [error term]            [approximation]

for some y between 0 and x, where f^[k] represents the k-th derivative of f. Therefore,
if g(x) represents the maximum absolute value of f^[n] between 0 and x, then we
could also write the error term as

    f(x) − Σ_{0≤k<n} f^[k](0)·x^k/k!  ∈  O((x^n/n!)·g(x)) = O(x^n·g(x)),

for fixed n.

²Yes, I know it's a Maclaurin series here, but it's still Taylor's theorem.
Table 1.1: Some examples of order relations. Throughout, ε > 0, 0 ≤ δ ≤ 1, p > 1,
and k, k′ > 1.

f(n): 1, 1 + 1/n
    Is contained in: O(10000), O(√n), O(n), O(n²), O(lg n), O(1 − 1/n);
        Ω(1), Ω(1/n), Ω(1 − 1/n); Θ(1), Θ(1 − 1/n); o(n), o(√n), o(n²)
    Is not contained in: O(1/n), O(e^(−√n));
        Ω(n), Ω(√n), Ω(lg n), Ω(n^(ε/2)); Θ(n), Θ(n²), Θ(lg n), Θ(√n);
        o(100 + e^(−n)), o(1)

f(n): log_k n, ⌈log_k n⌉, ⌊log_k n⌋
    Is contained in: O(n), O(n^ε), O(√n), O(log_k′ n), O(⌊log_k′ n⌋), O(n/log_k n);
        Ω(1), Ω(log_k′ n), Ω(⌊log_k′ n⌋);
        Θ(log_k′ n), Θ(⌊log_k′ n⌋), Θ(log_k n + 1000); o(n), o(n^ε)
    Is not contained in: O(1); Ω(n^ε), Ω(√n); Θ((log_k n)²), Θ(log_k n + n)

f(n): n, 100n + 15
    Is contained in: O(.0005n − 1000), O(n²), O(n lg n);
        Ω(50n + 1000), Ω(√n), Ω(n + lg n), Ω(1/n);
        Θ(50n + 100), Θ(n + lg n); o(n³), o(n lg n)
    Is not contained in: O(10000), O(lg n), O(n − n²/10000), O(√n);
        Ω(n²), Ω(n lg n); Θ(n²), Θ(1); o(1000n), o(n² sin n)

f(n): n², 10n² + n
    Is contained in: O(n² + 2n + 12), O(n³), O(n² + √n);
        Ω(n² + 2n + 12), Ω(n), Ω(1), Ω(n lg n); Θ(n² + 2n + 12), Θ(n² + lg n)
    Is not contained in: O(n), O(n lg n), O(1), o(50n² + 1000);
        Ω(n³), Ω(n² lg n); Θ(n), Θ(n sin n)

f(n): n^p
    Is contained in: O(p^n), O(n^p + 1000n^(p−1)); Ω(n^p); Θ(n^p + n^(pδ));
        o(p^n), o(n!), o(n^(p+ε))
    Is not contained in: O(n^(p−1)), O(1); Ω(n^(p+ε)), Ω(p^n);
        Θ(n^(p+ε)), Θ(1); o(p^(−n) + n^p)

f(n): 2^n, 2^n + n^p
    Is contained in: O(n!), O(2^n·n^p), O(3^n), O(2^(n+p));
        Ω(n^p), Ω((2 − ε)^n); Θ(2^n + n^p);
        o(n·2^n), o(n!), o(2^(n+εn)), o((2 + ε)^n)
    Is not contained in: O(n^p), O((2 − ε)^n); Ω((2 + ε)^n), Ω(n!); Θ(2^(2n))
This is, of course, a much weaker statement than the original (it allows
the error to be much bigger than it really is).

You'll often see statements like this written with a little algebraic manipulation:

    f(x) ∈ Σ_{0≤k<n} f^[k](0)·x^k/k! + O(x^n·g(x)).
To make sense of this sort of statement, we define addition (and so on) between
functions (a, b, etc.) and sets of functions (A, B, etc.):

    a + b = λx. a(x) + b(x)
    A + B = { a + b | a ∈ A, b ∈ B }
    A + b = { a + b | a ∈ A }
    a + B = { a + b | b ∈ B }

Similar definitions apply for multiplication, subtraction, and division. So if a is √x
and b is lg x, then a + b is a function whose value is √x + lg x for every (positive)
x. O(a(x)) + O(b(x)) (or just O(a) + O(b)) is then the set of functions you can get
by adding a member of O(√x) to a member of O(lg x). For example, O(a) contains
5√x + 3 and O(b) contains 0.01 lg x − 16, so O(a) + O(b) contains 5√x + 0.01 lg x − 13,
among many others.
1.2.1 Demonstrating Big-Ohness
Suppose we want to show that 5n² + 10√n ∈ O(n²). That is, we need to find K
and M so that

    |5n² + 10√n| ≤ K·n², for n > M.

We realize that n² grows faster than √n, so it eventually gets bigger than 10√n as
well. So perhaps we can take K = 6 and find M > 0 such that

    5n² + 10√n ≤ 5n² + n² = 6n²

To get 10√n < n², we need 10 < n^(3/2), or n > 10^(2/3) ≈ 4.7. So choosing M > 5
certainly works.
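The choice K = 6, M = 5 is easy to spot-check numerically. The following throwaway
snippet (my own illustration, not part of the text) evaluates both sides for a few
values of n:

class BigOhCheck {
    public static void main (String[] args) {
        // Check 5n^2 + 10*sqrt(n) <= 6n^2 for several n > 5.
        for (long n = 6; n <= 1000000L; n *= 10) {
            double lhs = 5.0 * n * n + 10 * Math.sqrt (n);
            double rhs = 6.0 * n * n;
            System.out.println ("n=" + n + ": " + lhs + " <= " + rhs
                                + " is " + (lhs <= rhs));
        }
    }
}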
1.3 Applications to Algorithm Analysis
In this course, we will usually deal with integer-valued functions arising from
measuring the complexity of algorithms. Table 1.1 gives a few common examples
of orders that we deal with and their containment relations, and the sections below
give examples of simple algorithmic analyses that use them.

1.3.1 Linear search

Let's apply all of this to a particular program. Here's a tail-recursive linear search
for seeing if a particular value is in a sorted array:
/** True iff X is one of A[k]...A[A.length-1].
* Assumes A is increasing, k>= 0. */
static boolean isIn (int[] A, int k, int X) {
if (k >= A.length)
return false;
else if (A[k] > X)
return false;
else if (A[k] == X)
return true;
else
return isIn (A, k+1, X);
}
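Because the recursive call is in tail position, the same computation can be written
with an explicit loop; here is a minimal iterative rendering of mine, for comparison:

/** Iterative equivalent of isIn (an illustration, not from the text). */
static boolean isInLoop (int[] A, int k, int X) {
    while (k < A.length) {
        if (A[k] > X)
            return false;        // past where X would be in the sorted array
        else if (A[k] == X)
            return true;
        k += 1;
    }
    return false;
}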
This is essentially a loop. As a measure of its complexity, let's define C_isIn(N)
as the maximum number of instructions it executes for a call with k = 0 and
A.length = N. By inspection, you can see that such a call will execute the first if
test up to N + 1 times, the second and third up to N times, and the tail-recursive
call on isIn up to N times. With one compiler³, each recursive call of isIn executes
at most 14 instructions before returning or tail-recursively calling isIn. The initial
call executes 18. That gives a total of at most 14N + 18 instructions. If instead we
count the number of comparisons k >= A.length, we get at most N + 1. If we count
the number of comparisons against X or the number of fetches of A[k], we get at
most 2N. We could therefore say that the function giving the largest amount of
time required to process an input of size N is either in O(14N + 18), O(N + 1),
or O(2N). However, these are all the same set, and in fact all are equal to O(N).
Therefore, we may throw away all those messy integers and describe C_isIn(N) as
being in O(N), thus illustrating the simplifying power of ignoring constant factors.

³a version of gcc with the -O option, generating SPARC code for a Sun Sparcstation IPC
workstation.

This bound is a worst-case time. For all arguments in which X <= A[0], the isIn
function runs in constant time. That time bound, the best-case bound, is seldom
very useful, especially when it applies to so atypical an input.

Giving an O(·) bound to C_isIn(N) doesn't tell us that isIn must take time
proportional to N even in the worst case, only that it takes no more. In this
particular case, however, the argument used above shows that the worst case is, in
fact, at least proportional to N, so that we may also say that C_isIn(N) ∈ Ω(N).
Putting the two results together, C_isIn(N) ∈ Θ(N).

In general, then, asymptotic analysis of the space or time required for a given
algorithm involves the following.

- Deciding on an appropriate measure for the size of an input (e.g., length of
  an array or a list).
- Choosing a representative quantity to measure, one that is proportional to
  the "real" space or time required.
- Coming up with one or more functions that bound the quantity we've decided
  to measure, usually in the worst case.
- Possibly summarizing these functions by giving O(·), Ω(·), or Θ(·) character-
  izations of them.
1.3.2 Quadratic example
Here is a bit of code for sorting integers:
static void sort (int[] A) {
for (int i = 1; i < A.length; i += 1) {
int x = A[i];
int j;
for (j = i; j > 0 && x < A[j-1]; j -= 1)
A[j] = A[j-1];
A[j] = x;
}
}
If we define C_sort(N) as the worst-case number of times the comparison x < A[j-1]
is executed for N = A.length, we see that for each value of i from 1 to A.length − 1,
the program executes the comparison in the inner loop (on j) at most i times.
Therefore,

    C_sort(N) = 1 + 2 + ··· + (N − 1)
              = N(N − 1)/2
              ∈ Θ(N²)

This is a common pattern for nested loops.
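To watch the N(N − 1)/2 count appear, here is an instrumented copy of sort (the
counter is my addition, not the text's). On a reverse-sorted array every inner-loop
comparison succeeds until j reaches 0, so the tally is exactly N(N − 1)/2:

class SortCount {
    static long comparisons = 0;

    /** The sort above, counting evaluations of the comparison x < A[j-1]. */
    static void sort (int[] A) {
        for (int i = 1; i < A.length; i += 1) {
            int x = A[i];
            int j;
            for (j = i; j > 0; j -= 1) {
                comparisons += 1;            // one evaluation of x < A[j-1]
                if (!(x < A[j-1]))
                    break;
                A[j] = A[j-1];
            }
            A[j] = x;
        }
    }

    public static void main (String[] args) {
        int N = 1000;
        int[] A = new int[N];
        for (int i = 0; i < N; i += 1)
            A[i] = N - i;                    // reverse-sorted: the worst case
        sort (A);
        System.out.println (comparisons + " comparisons; N(N-1)/2 = "
                            + (long) N * (N - 1) / 2);
    }
}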
1.3.3 Explosive example
Consider a function with the following form.
static int boom (int M, int X) {
if (M == 0)
return H (X);
return boom (M-1, Q(X))
+ boom (M-1, R(X));
}
and suppose we want to compute C_boom(M), the number of times Q is called for
a given M in the worst case. If M = 0, this is 0. If M > 0, then Q gets executed
once in computing the argument of the first recursive call, and then it gets executed
however many times the two inner calls of boom with arguments of M − 1 execute
it. In other words,

    C_boom(0) = 0
    C_boom(i) = 2·C_boom(i − 1) + 1

A little mathematical massage:

    C_boom(M) = 2·C_boom(M − 1) + 1,           for M ≥ 1
              = 2·(2·C_boom(M − 2) + 1) + 1,   for M ≥ 2
              ...
              = 2·(··· (2·0 + 1) ··· + 1) + 1  [M nested terms]
              = Σ_{0≤j≤M−1} 2^j
              = 2^M − 1

and so C_boom(M) ∈ Θ(2^M).
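The recurrence is easy to check by running the function. In this sketch H, Q, and R
are trivial stand-ins of my own (the text leaves them unspecified), and a counter
records how often Q is called:

class BoomCount {
    static int qCalls = 0;
    static int H (int x) { return x; }               // stand-in
    static int Q (int x) { qCalls += 1; return x; }  // stand-in, counted
    static int R (int x) { return x; }               // stand-in

    static int boom (int M, int X) {
        if (M == 0)
            return H (X);
        return boom (M-1, Q(X))
            + boom (M-1, R(X));
    }

    public static void main (String[] args) {
        for (int M = 0; M <= 10; M += 1) {
            qCalls = 0;
            boom (M, 0);
            System.out.println ("M=" + M + ": " + qCalls
                                + " calls to Q (2^M - 1 = " + ((1 << M) - 1) + ")");
        }
    }
}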
1.3.4 Divide and conquer
Things become more interesting when the recursive calls decrease the size of pa-
rameters by a multiplicative rather than an additive factor. Consider, for example,
binary search.
/** Returns true iff X is one of
* A[L]...A[U]. Assumes A increasing,
* L>=0, U-L < A.length. */
static boolean isInB (int[] A, int L, int U, int X) {
if (L > U)
return false;
else {
int m = (L+U)/2;
if (A[m] == X)
return true;
else if (A[m] > X)
return isInB (A, L, m-1, X);
else
return isInB (A, m+1, U, X);
}
}
The worst-case time here depends on the number of elements of A under consid-
eration, U − L + 1, which we'll call N. Let's use the number of times the first line
is executed as the cost, since if the rest of the body is executed, the first line also
had to have been executed⁴. If N > 1, the cost of executing isInB is 1 comparison
of L and U followed by the cost of executing isInB either with ⌊(N − 1)/2⌋ or with
⌈(N − 1)/2⌉ as the new value of N⁵. Either quantity is no more than ⌈(N − 1)/2⌉.
If N ≤ 1, there are two comparisons against N in the worst case.

⁴For those of you seeking arcane knowledge, we say that the test L > U dominates all other
statements.
⁵The notation ⌊x⌋ means the result of rounding x down (toward −∞) to an integer, and ⌈x⌉
means the result of rounding x up to an integer.

Therefore, the following recurrence describes the cost, C_isInB(i), of executing
this function when U − L + 1 = i.

    C_isInB(1) = 2
    C_isInB(i) = 1 + C_isInB(⌈(i − 1)/2⌉),  for i > 1.

This is a bit hard to deal with, so let's again make the reasonable assumption that
the value of the cost function, whatever it is, must increase as N increases. Then
we can compute a cost function, C′_isInB, that is slightly larger than C_isInB, but
easier to compute.

    C′_isInB(1) = 2
    C′_isInB(i) = 1 + C′_isInB(i/2),  for i > 1 a power of 2.

This is a slight over-estimate of C_isInB, but that still allows us to compute upper
bounds. Furthermore, C′_isInB is defined only on powers of two, but since isInB's
cost increases as N increases, we can still bound C_isInB(N) conservatively by
computing C′_isInB of the next higher power of 2. Again with the massage:

    C′_isInB(i) = 1 + C′_isInB(i/2),      i > 1 a power of 2
                = 1 + 1 + C′_isInB(i/4),  i > 2 a power of 2
                ...
                = 1 + ··· + 1 + 2,        with lg N ones in the sum

The quantity lg N is the logarithm of N base 2, or roughly "the number of times one
can divide N by 2 before reaching 1." In summary, we can say C_isInB(N) ∈ O(lg N).
Similarly, one can in fact derive that C_isInB(N) ∈ Θ(lg N).
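To see the logarithmic growth concretely, one can instrument the first test. In this
sketch (the counter and main method are my additions, not the text's), an
unsuccessful search is run on arrays of doubling size:

class BinSearchCount {
    static int tests = 0;

    /** isInB as in the text, with a counter on its first line. */
    static boolean isInB (int[] A, int L, int U, int X) {
        tests += 1;                          // one execution of the first line
        if (L > U)
            return false;
        else {
            int m = (L+U)/2;
            if (A[m] == X)
                return true;
            else if (A[m] > X)
                return isInB (A, L, m-1, X);
            else
                return isInB (A, m+1, U, X);
        }
    }

    public static void main (String[] args) {
        for (int N = 1; N <= 1 << 20; N *= 2) {
            int[] A = new int[N];            // all zeroes; search for 1 (absent)
            tests = 0;
            isInB (A, 0, N-1, 1);
            System.out.println ("N=" + N + ": " + tests + " tests");
        }
    }
}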
1.3.5 Divide and fight to a standstill
Consider now a subprogram that contains two recursive calls.
static void mung (int[] A, int L, int U) {
    if (L < U) {
        int m = (L+U)/2;
        mung (A, L, m);
        mung (A, m+1, U);
    }
}
We can approximate the arguments of both of the internal calls by N/2 as before,
ending up with the following approximation, C_mung(N), to the cost of calling mung
with argument N = U − L + 1 (we are counting the number of times the test in the
first line executes).

    C_mung(1) = 3
    C_mung(i) = 1 + 2·C_mung(i/2),  for i > 1 a power of 2.

So,

    C_mung(N) = 1 + 2·(1 + 2·C_mung(N/4)),  N > 2 a power of 2
              ...
              = 1 + 2 + 4 + ··· + N/2 + N·3

This is a sum of a geometric series (1 + r + r² + ··· + r^m), with a little extra added
on. The general rule for geometric series is

    Σ_{0≤k≤m} r^k = (r^(m+1) − 1)/(r − 1)

so, taking r = 2,

    C_mung(N) = 4N − 1

or C_mung(N) ∈ Θ(N).
1.4 Amortization
So far, we have considered the time spent by individual operations, or individual
calls on a certain function of interest. Sometimes, however, it is fruitful to consider
the cost of a whole sequence of calls, especially when each call affects the cost of later
calls.

Consider, for example, a simple binary counter. Incrementing this counter causes
it to go through a sequence like this:

    0 0 0 0 0
    0 0 0 0 1
    0 0 0 1 0
    0 0 0 1 1
    0 0 1 0 0
        ...
    0 1 1 1 1
    1 0 0 0 0
        ...
Each step consists of flipping a certain number of bits, converting bit b to 1 − b.
More precisely, the algorithm for going from one step to another is

Increment: Flip the bits of the counter from right to left, up to and including the
first 0-bit encountered (if any).

Clearly, if we are asked to give a worst-case bound on the cost of the increment
operation for an N-bit counter (in number of flips), we'd have to say that it is
Θ(N): all the bits can be flipped. Using just that bound, we'd then have to say
that the cost of performing M increment operations is Θ(M·N).
But the costs of consecutive increment operations are related. For example, if
one increment flips more than one bit, the next increment will always flip exactly
one (why?). In fact, if you consider the pattern of bit changes, you'll see that the
units (rightmost) bit flips on every increment, the 2's bit on every second increment,
the 4's bit on every fourth increment, and in general, the 2^k's bit on every (2^k)th
increment. Therefore, over any sequence of M consecutive increments, starting at
0, there will be

    M [units flips] + ⌊M/2⌋ [2's flips] + ⌊M/4⌋ [4's flips] + ··· + ⌊M/2^n⌋ [2^n's flips],
        where n = ⌊lg M⌋
      = (2^n + 2^(n−1) + 2^(n−2) + ··· + 1) + (M − 2^n)    [the first sum is 2^(n+1) − 1]
      = 2^n − 1 + M
      < 2M flips

In other words, this is the same result we would get if we performed M incre-
ments each of which had a worst-case cost of 2 flips, rather than N. We call 2 flips
the amortized cost of an increment. To amortize in the context of algorithms is to
treat the cost of each individual operation in a sequence as if it were spread out
among all the operations in the sequence⁶. Any particular increment might take up
to N flips, but we treat that as N/M flips credited to each increment operation in
the sequence (and likewise count each increment that takes only one flip as 1/M flip
for each increment operation). The result is that we get a more realistic idea of how
much time the entire program will take; simply multiplying the ordinary worst-case
time by M gives us a very loose and pessimistic estimate. Nor is amortized cost the
same as average cost; it is a stronger measure. If a certain operation has a given
average cost, that leaves open the possibility that there is some unlikely sequence
of inputs that will make it look bad. A bound on amortized worst-case cost, on the
other hand, is guaranteed to hold regardless of input.

⁶The word "amortize" comes from an Old French word meaning "to death." The original mean-
ing, from which the computer-science usage comes, is "to gradually write off the initial cost of
something."
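The 2M bound is easy to confirm by experiment. In this sketch (my own
illustration), the counter is an array of bits, least-significant first, and totalFlips
accumulates the cost of M increments starting from 0:

class CounterFlips {
    static int totalFlips = 0;

    /** Flip bits of C from right to left, up to and including the first 0. */
    static void increment (int[] C) {
        for (int i = 0; i < C.length; i += 1) {
            C[i] = 1 - C[i];          // flip this bit
            totalFlips += 1;
            if (C[i] == 1)            // we just flipped a 0; stop
                break;
        }
    }

    public static void main (String[] args) {
        int[] counter = new int[32];
        int M = 1000000;
        for (int k = 0; k < M; k += 1)
            increment (counter);
        System.out.println (totalFlips + " flips for " + M
                            + " increments (bound: " + 2L * M + ")");
    }
}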
1.5 Complexity of Problems
So far, I have discussed only the analysis of an algorithm's complexity. An algo-
rithm, however, is just a particular way of solving some problem. We might therefore
consider asking for complexity bounds on the problem's complexity. That is, can we
bound the complexity of the best possible algorithm? Obviously, if we have a partic-
ular algorithm and its time complexity is O(f(n)), where n is the size of the input,
then the complexity of the best possible algorithm must also be O(f(n)). We call
f(n), therefore, an upper bound on the (unknown) complexity of the best-possible
algorithm. But this tells us nothing about whether the best-possible algorithm is
any faster than this; it puts no lower bound on the time required for the best al-
gorithm. For example, the worst-case time for isIn is Θ(N). However, isInB is
much faster. Indeed, one can show that if the only knowledge the algorithm can
have is the result of comparisons between X and elements of the array, then isInB
has the best possible bound (it is optimal), so that the entire problem of finding an
element in an ordered array has worst-case time Θ(lg N).

Putting an upper bound on the time required to perform some problem simply
involves finding an algorithm for the problem. By contrast, putting a good lower
bound on the required time is much harder. We essentially have to prove that no
algorithm can have a better execution time than our bound, regardless of how much
smarter the algorithm designer is than we are. Trivial lower bounds, of course, are
easy: every problem's worst-case time is Ω(1), and the worst-case time of any prob-
lem whose answer depends on all the data is Ω(N), assuming that one's idealized
machine is at all realistic. Better lower bounds than those, however, require quite
a bit of work. All the better to keep our theoretical computer scientists employed.
1.6 A Note on Notation
Other authors use notation such as f(n) = O(n²) rather than f(n) ∈ O(n²). I don't
because I consider it nonsensical. To justify the use of =, one either has to think
of f(n) as a set of functions (which it isn't), or think of O(n²) as a single function
that differs with each separate appearance of O(n²) (which is bizarre). I can see no
disadvantages to using ∈, which makes perfect sense, so that's what I use.
Exercises

1.1. Demonstrate the following, or give counter-examples where indicated. Show-
ing that a certain O(·) formula is true means producing suitable K and M for
the definition at the beginning of §1.1. Hint: sometimes it is useful to take the
logarithms of two functions you are comparing.

a. O(max(|f₀(n)|, |f₁(n)|)) = O(f₀(n)) + O(f₁(n)).

b. If f(n) is a polynomial in n, then lg f(n) ∈ O(lg n).

c. O(f(n) + g(n)) = O(f(n)) + O(g(n)). This is a bit of a trick question, really,
to make you look at the definitions carefully. Under what conditions is the
equation true?

d. There is a function f(x) > 0 such that f(x) ∉ O(x) and f(x) ∉ Ω(x).

e. There is a function f(x) such that f(0) = 0, f(1) = 100, f(2) = 10000, f(3) =
10⁶, but f(n) ∈ O(n).

f. n³ lg n ∈ O(n^3.0001).

g. There is no constant k such that n³ lg n ∈ Θ(n^k).
1.2. Show each of the following false by exhibiting a counterexample. Assume
that f and g are any real-valued functions.

a. O(f(x)·s(x)) = o(f(x)), assuming lim_{x→∞} s(x) = 0.

b. If f(x) ∈ O(x³) and g(x) ∈ O(x) then f(x)/g(x) ∈ O(x²).

c. If f(x) ∈ Ω(x) and g(x) ∈ Ω(x) then f(x) + g(x) ∈ Ω(x).

d. If f(100) = 1000 and f(1000) = 1000000 then f cannot be O(1).

e. If f₁(x), f₂(x), . . . are a bunch of functions that are all in Ω(1), then
F(x) = Σ_{1≤i≤N} f_i(x) ∈ Ω(N).
Chapter 2
Data Types in the Abstract
Most of the classical data structures covered in courses like this represent some
sort of collection of data. That is, they contain some set or multiset¹ of values,
possibly with some ordering on them. Some of these collections of data are asso-
ciatively indexed; they are search structures that act like functions mapping certain
indexing values (keys) into other data (such as names into street addresses).

¹A multiset or bag is like a set except that it may contain multiple copies of a particular data
value. That is, each member of a multiset has a multiplicity: a number of times that it appears.

We can characterize the situation in the abstract by describing sets of opera-
tions that are supported by different data structures; that is, by describing possible
abstract data types. From the point of view of a program that needs to represent
some kind of collection of data, this set of operations is all that one needs to know.

For each different abstract data type, there are typically several possible imple-
mentations. Which you choose depends on how much data your program has to
process, how fast it has to process the data, and what constraints it has on such
things as memory space. It is a dirty little secret of the trade that for quite a few
programs, it hardly matters what implementation you choose. Nevertheless, the
well-equipped programmer should be familiar with the available tools.

I expect that many of you will find this chapter frustrating, because it will talk
mostly about interfaces to data types without talking very much at all about the
implementations behind them. Get used to it. After all, the standard library behind
any widely used programming language is presented to you, the programmer, as a
set of interfaces: directions for what parameters to pass to each function and some
commentary, generally in English, about what it does. As a working programmer,
you will in turn spend much of your time producing modules that present the same
features to your clients.
2.1 Iterators
If we are to develop some general notion of a collection of data, there is at least one
generic question we'll have to answer: how are we going to get items out of such a
collection? You are familiar with one kind of collection already: an array. Getting
items out of an array is easy; for example, to print the contents of an array, you
might write
for (int i = 0; i < A.length; i += 1)
System.out.print (A[i] + ", ");
Arrays have a natural notion of an n-th element, so such loops are easy. But what
about other collections? Which is the "first" penny in a jar of pennies? Even if
we do arbitrarily choose to give every item in a collection a number, we will find
that the operation "fetch the n-th item" may be expensive (consider lists of things
such as in Scheme).

The problem with attempting to impose a numbering on every collection of items
as a way to extract them is that it forces the implementor of the collection to provide
a more specific tool than our problem may require. It's a classic engineering trade-
off: satisfying one constraint (that one be able to fetch the n-th item) may have
other costs (fetching all items one by one may become expensive).

So the problem is to provide the items in a collection without relying on indices,
or possibly without relying on order at all. Java provides two conventions, realized as
interfaces. The interface java.util.Iterator provides a way to access all the items
in a collection in some order. The interface java.util.ListIterator provides a
way to access items in a collection in some specific order, but without assigning an
index to each item².
2.1.1 The Iterator Interface
The Java library defines an interface, java.util.Iterator, shown in Figure 2.1,
that captures the general notion of "something that sequences through all items
in a collection" without any commitment to order. This is only a Java interface;
there is no implementation behind it. In the Java library, the standard way for a
class that represents a collection of data items to provide a way to sequence through
those items is to define a method such as

Iterator<SomeType> iterator () { ... }

that allocates and returns an Iterator (Figure 3.3 includes an example). Often the
actual type of this iterator will be hidden (even private); all the user of the class
needs to know is that the object returned by iterator provides the operations
hasNext and next (and sometimes remove). For example, a general way to print
all elements of a collection of Strings (analogous to the previous array printer)
might be

for (Iterator<String> i = C.iterator (); i.hasNext (); )
    System.out.println (i.next () + " ");

²The library also defines the interface java.util.Enumeration, which is essentially an older
version of the same idea. We won't talk about that interface here, since the official position is that
Iterator is preferred for new programs.
package java.util;
/** An object that delivers each item in some collection of items
* each of which is a T. */
public interface Iterator <T> {
/** True iff there are more items to deliver. */
boolean hasNext ();
/** Advance THIS to the next item and return it. */
T next ();
/** Remove the last item delivered by next() from the collection
* being iterated over. Optional operation: may throw
* UnsupportedOperationException if removal is not possible. */
void remove ();
}
Figure 2.1: The java.util.Iterator interface.
The programmer who writes this loop needn't know what gyrations the object i
has to go through to produce the requested elements; even a major change in how
C represents its collection requires no modification to the loop.

This particular kind of for loop is so common and useful that in Java 2, version
1.5, it has its own "syntactic sugar," known as an enhanced for loop. You can write

for (String i : C)
    System.out.println (i + " ");

to get the same effect as the previous for loop. Java will insert the missing pieces,
turning this into

for (Iterator<String> _i = C.iterator (); _i.hasNext (); ) {
    String i = _i.next ();
    System.out.println (i + " ");
}

where _i is some new variable introduced by the compiler, unused elsewhere
in the program, and whose type is taken from that of C.iterator(). This en-
hanced for loop will work for any object C whose type implements the interface
java.lang.Iterable, defined simply

public interface Iterable<T> {
    Iterator<T> iterator ();
}

Thanks to the enhanced for loop, simply by defining an iterator method on a type
you define, you provide a very convenient way to sequence through any subparts
that objects of that type might contain.
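For instance (an illustration of mine, not a library class), here is a tiny type whose
objects can appear directly in an enhanced for loop because the type implements
Iterable:

import java.util.Iterator;
import java.util.NoSuchElementException;

/** The sequence of integers lo, lo+1, ..., hi-1. */
class IntRange implements Iterable<Integer> {
    private final int lo, hi;
    IntRange (int lo, int hi) { this.lo = lo; this.hi = hi; }

    public Iterator<Integer> iterator () {
        return new Iterator<Integer> () {
            private int next = lo;
            public boolean hasNext () { return next < hi; }
            public Integer next () {
                if (! hasNext ())
                    throw new NoSuchElementException ();
                return next++;
            }
            public void remove () {
                throw new UnsupportedOperationException ();
            }
        };
    }
}

// Usage: for (int i : new IntRange (0, 5)) System.out.print (i + " ");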
Well, needless to say, having introduced this convenient shorthand for Iterators,
Java's designers were suddenly in the position that iterating through the elements
of an array was much clumsier than iterating through those of a library class. So
they extended the enhanced for statement to encompass arrays. So, for example,
these two methods are equivalent:

/** The sum of the elements of A */
int sum (int[] A) {
    int S;
    S = 0;
    for (int x : A)
        S += x;
    return S;
}

/** The sum of the elements of A */
int sum (int[] A) {
    int S;
    S = 0;
    for (int _k = 0; _k < A.length; _k += 1) {
        int x = A[_k];
        S += x;
    }
    return S;
}

where _k again stands for a new variable introduced by the compiler.
2.1.2 The ListIterator Interface
Some collections do have a natural notion of ordering, but it may still be expensive
to extract an arbitrary item from the collection by index. For example, you may
have seen linked lists in the Scheme language: given an item in the list, it requires
n operations to find the n-th succeeding item (in contrast to a Java array, which
requires only one Java operation or a few machine operations to retrieve any item).
The standard Java library contains the interface java.util.ListIterator, which
captures the idea of sequencing through an ordered sequence without fetching each
explicitly by number. It is summarized in Figure 2.2. In addition to the nav-
igational methods and the remove method of Iterator (which it extends), the
ListIterator class provides operations for inserting new items or replacing items
in a collection.
2.2 The Java Collection Abstractions
The Java library (beginning with JDK 1.2) provides a hierarchy of interfaces rep-
resenting various kinds of collection, plus a hierarchy of abstract classes to help
programmers provide implementations of these interfaces, as well as a few actual
(concrete) implementations. These classes are all found in the package java.util.
Figure 2.4 illustrates the hierarchy of classes and interfaces devoted to collections.
2.2.1 The Collection Interface
The Java library interface java.util.Collection, whose methods are summarized
in Figures 2.5 and 2.6, is supposed to describe data structures that contain collec-
tions of values, where each value is a reference to some Object (or null). The
term "collection" as opposed to "set" is appropriate here, because Collection is
supposed to be able to describe multisets (bags) as well as ordinary mathematical sets.
package java.util;
/** Abstraction of a position in an ordered collection. At any
* given time, THIS represents a position (called its cursor)
* that is just after some number of items of type T (0 or more) of
* a particular collection, called the underlying collection. */
public interface ListIterator<T> extends Iterator<T> {
/* Exceptions: Methods that return items from the collection throw
* NoSuchElementException if there is no appropriate item. Optional
* methods throw UnsupportedOperationException if the method is not
* supported. */
/* Required methods: */
/** True unless THIS is past the last item of the collection */
boolean hasNext ();
/** True unless THIS is before the first item of the collection */
boolean hasPrevious ();
/** Returns the item immediately after the cursor, and
* moves the current position to just after that item.
* Throws NoSuchElementException if there is no such item. */
T next ();
/** Returns the item immediately before the cursor, and
* moves the current position to just before that item.
* Throws NoSuchElementException if there is no such item. */
T previous ();
/** The number of items before the cursor */
int nextIndex ();
/** nextIndex () - 1 */
int previousIndex ();
Figure 2.2: The java.util.ListIterator interface.
/* Optional methods: */
/** Insert item X into the underlying collection immediately before
* the cursor (X will be returned by previous()). */
void add (T x);
/** Remove the item returned by the most recent call to .next ()
* or .previous (). There must not have been a more recent
* call to .add(). */
void remove ();
/** Replace the item returned by the most recent call to .next ()
* or .previous () with X in the underlying collection.
* There must not have been a more recent call to .add() or .remove(). */
void set (T x);
}
Figure 2.2, continued: Optional methods in the ListIterator class.
[Figure 2.3: diagram of the Map-related types: interface Map at the top; abstract
class AbstractMap and interface SortedMap below it; concrete classes HashMap,
WeakHashMap, and TreeMap at the bottom.]

Figure 2.3: The Java library's Map-related types (from java.util). Ellipses rep-
resent interfaces; dashed boxes are abstract classes, and solid boxes are concrete
(non-abstract) classes. Solid arrows indicate "extends" relationships, and dashed
arrows indicate "implements" relationships. The abstract classes are for use by
implementors wishing to add new collection classes; they provide default implemen-
tations of some methods. Clients apply new to the concrete classes to get instances,
and (at least ideally), use the interfaces as formal parameter types so as to make
their methods as widely applicable as possible.
[Figure 2.4: diagram of the Collection-related types: interface Collection at the
top, with interfaces List and Set below it and SortedSet below Set; abstract
classes AbstractCollection, AbstractList, AbstractSequentialList, and
AbstractSet; concrete classes ArrayList, Vector, Stack, LinkedList, HashSet,
and TreeSet.]

Figure 2.4: The Java library's Collection-related types (from java.util). See Fig-
ure 2.3 for the notation.
Since this is an interface, the documentation comments describing the operations
need not be accurate; an inept or mischievous programmer can write a class that
implements Collection in which the add method removes values. Nevertheless,
any decent implementor will honor the comments, so that any method that accepts
a Collection, C, as an argument can expect that, after executing C.add(x), the
value x will be in C.

Not every kind of Collection needs to implement every method (specifically,
not the optional methods in Figure 2.6), but may instead choose to raise the stan-
dard exception UnsupportedOperationException. See §2.5 for a further dis-
cussion of this particular design choice. Classes that implement only the required
methods are essentially read-only collections; they can't be modified once they are
created.

The comment concerning constructors in Figure 2.5 is, of course, merely a com-
ment. Java interfaces do not have constructors, since they do not represent specific
types of concrete object. Nevertheless, you ultimately need some constructor to cre-
ate a Collection in the first place, and the purpose of the comment is to suggest
some useful uniformity.

At this point, you may well be wondering of what possible use the Collection
class might be, inasmuch as it is impossible to create one directly (it is an interface),
and you are missing details about what its members do (for example, can a given
Collection have two equal elements?). The point is that any function that you
can write using just the information provided in the Collection interface will work
for all implementations of Collection.
For example, here is a simple method to determine if the elements of one Collection
are a subset of another:
/** True iff C0 is a subset of C1, ignoring repetitions. */
public static boolean subsetOf (Collection<?> C0, Collection<?> C1) {
for (Object i : C0)
if (! C1.contains (i))
return false;
// Note: equivalent to
// for (Iterator<?> iter = C0.iterator(); iter.hasNext (); ) {
// Object i = iter.next ();
// ...
return true;
}
We have no idea what kinds of objects C0 and C1 are (they might be completely
different implementations of Collection), in what order their iterators deliver ele-
ments, or whether they allow repetitions. This method relies solely on the properties
described in the interface and its comments, and therefore always works (assuming,
as always, that the programmers who write classes that implement Collection
do their jobs). We don't have to rewrite it for each new kind of Collection we
implement.
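To see the point concretely, here is a small demonstration of mine that feeds
subsetOf two entirely different implementations of Collection:

import java.util.*;

class SubsetDemo {
    /** True iff C0 is a subset of C1, ignoring repetitions (as in the text). */
    static boolean subsetOf (Collection<?> C0, Collection<?> C1) {
        for (Object i : C0)
            if (! C1.contains (i))
                return false;
        return true;
    }

    public static void main (String[] args) {
        Collection<String> c0 = new ArrayList<String> (Arrays.asList ("a", "b"));
        Collection<String> c1 = new HashSet<String> (Arrays.asList ("b", "a", "c"));
        System.out.println (subsetOf (c0, c1));  // true: "a" and "b" both occur in c1
        System.out.println (subsetOf (c1, c0));  // false: "c" does not occur in c0
    }
}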
package java.util;
/** A collection of values, each an Object reference. */
public interface Collection<T> extends Iterable<T> {
/* Constructors. Classes that implement Collection should
* have at least two constructors:
* CLASS (): Constructs an empty CLASS
* CLASS (C): Where C is any Collection, constructs a CLASS that
* contains the same elements as C. */
/* Required methods: */
/** The number of values in THIS. */
int size ();
/** True iff size () == 0. */
boolean isEmpty ();
/** True iff THIS contains X: that is, if for some z in
* THIS, either z and X are null, or z.equals (X). */
boolean contains (Object x);
/** True iff contains(x) for all elements x in C. */
boolean containsAll (Collection<?> c);
/** An iterator that yields all the elements of THIS, in some
* order. */
Iterator<T> iterator ();
/** A new array containing all elements of THIS. */
Object[] toArray ();
/** Assuming ANARRAY has dynamic type T[] (where T is some
* reference type), the result is an array of type T[] containing
* all elements of THIS. The result is ANARRAY itself, if all of
* these elements fit (leftover elements of ANARRAY are set to null).
* Otherwise, the result is a new array. It is an error if not
* all items in THIS are assignable to T. */
<T> T[] toArray (T[] anArray);
Figure 2.5: The interface java.util.Collection, required members.
// Interface java.util.Collection, continued.
/* Optional methods. Any of these may do nothing except to
* throw UnsupportedOperationException. */
/** Cause X to be contained in THIS. Returns true if the Collection
 * changes as a result. */
boolean add (T x);
/** Cause all members of C to be contained in THIS. Returns true
* if the object THIS changes as a result. */
boolean addAll (Collection<? extends T> c);
/** Remove all members of THIS. */
void clear ();
/** Remove a Object .equal to X from THIS, if one exists,
* returning true iff the object THIS changes as a result. */
boolean remove (Object X);
/** Remove all elements, x, such that C.contains(x) (if any
* are present), returning true iff there were any
* objects removed. */
boolean removeAll (Collection<?> c);
/** Intersection: Remove all elements, x, such that C.contains(x)
* is false, returning true iff any items were removed. */
boolean retainAll (Collection<?> c);
}
Figure 2.6: Optional members of the interface java.util.Collection
2.2.2 The Set Interface
In mathematics, a set is a collection of values in which there are no duplicates. This
is the idea also for the interface java.util.Set. Unfortunately, this provision is
not directly expressible in the form of a Java interface. In fact, as far as the Java
compiler is concerned, the following serves as a perfectly good definition:
package java.util;
public interface Set<T> extends Collection<T> { }
The methods, that is, are all the same. The differences are all in the comments.
The one-copy-of-each-element rule is reflected in more specific comments on several
methods. The result is shown in Figure 2.7. In this definition, we also include the
methods equals and hashCode. These methods are automatically part of any
interface, because they are defined in the Java class java.lang.Object, but I
included them here because their semantic specification (the comment) is more
stringent than for the general Object. The idea, of course, is for equals to denote
set equality. We'll return to hashCode in Chapter 7.
2.2.3 The List Interface
As the term is used in the Java libraries, a list is a sequence of items, possibly with
repetitions. That is, it is a specialized kind of Collection, one in which there is
a sequence to the elements: a first item, a last item, an nth item; and items may
be repeated (it can't be considered a Set). As a result, it makes sense to extend
the interface (relative to Collection) to include additional methods that make
sense for well-ordered sequences. Figure 2.8 displays the interface.
A great deal of functionality here is wrapped up in the listIterator method
and the object it returns. As you can see from the interface descriptions, you can
insert, add, remove, or sequence through items in a List either by using methods
in the List interface itself, or by using listIterator to create a list iterator with
which you can do the same. The idea is that using the listIterator to process
an entire list (or some part of it) will generally be faster than using get and other
methods of List that use numeric indices to denote items of interest.
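For instance, here is a small sketch (hypothetical, not from the text; doubleAll is
an illustrative name) that replaces every string in a List by a doubled copy,
traversing and updating through a ListIterator rather than through numeric
indices:

    /** Replace each string in L by itself doubled: an illustration
     * of modifying a List through its ListIterator. */
    static void doubleAll (List<String> L) {
        for (ListIterator<String> i = L.listIterator (); i.hasNext (); ) {
            String s = i.next ();
            i.set (s + s); // replace the element last returned by next
        }
    }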
Views
The subList method is particularly interesting. A call such as L.subList(i,j) is
supposed to produce another List (which will generally not be of the same type as
L) consisting of the ith through the (j-1)th items of L. Furthermore, it is to do
this by providing a view of this part of L; that is, an alternative way of accessing
the same data containers. The idea is that modifying the sublist (using methods
such as add, remove, and set) is supposed to modify the corresponding portion of
L as well. For example, to remove all but the first k items in list L, you might write
L.subList (k, L.size ()).clear ();
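The following fragment (a hypothetical illustration, not from the text) shows these
view semantics at work:

    List<String> L = new ArrayList<String> (Arrays.asList ("a", "b", "c", "d"));
    List<String> V = L.subList (1, 3); // a view of "b", "c"
    V.set (0, "B");                    // L is now [a, B, c, d]
    V.clear ();                        // L is now [a, d]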
package java.util;
/** A Collection that contains at most one null item and in which no
* two distinct non-null items are .equal. The effects of modifying
* an item contained in a Set so as to change the value of .equal
* on it are undefined. */
public interface Set<T> extends Collection<T> {
/* Constructors. Classes that implement Set should
* have at least two constructors:
* CLASS (): Constructs an empty CLASS
* CLASS (C): Where C is any Collection, constructs a CLASS that
* contains the same elements as C, with duplicates removed. */
/** Cause X to be contained in THIS. Returns true iff X was
 * not previously a member. */
boolean add (T x);
/** True iff S is a Set (instanceof Set) and is equal to THIS as a
 * set (size()==S.size() and each item in S is contained in THIS). */
boolean equals (Object S);
/** The sum of the values of x.hashCode () for all x in THIS, with
* the hashCode of null taken to be 0. */
int hashCode ();
/* Other methods inherited from Collection:
* size, isEmpty, contains, containsAll, iterator, toArray,
* addAll, clear, remove, removeAll, retainAll */
}
Figure 2.7: The interface java.util.Set. Only methods with comments that are
more specific than those of Collection are shown.
package java.util;
/** An ordered sequence of items, indexed by numbers 0 .. N-1,
* where N is the size() of the List. */
public interface List<T> extends Collection<T> {
/* Required methods: */
/** The Kth element of THIS, where 0 <= K < size(). Throws
* IndexOutOfBoundsException if K is out of range. */
T get (int k);
/** The first value k such that get(k) is null if X==null,
 * or X.equals (get(k)) otherwise; -1 if there is no such k. */
int indexOf (Object x);
/** The largest value k such that get(k) is null if X==null,
 * or X.equals (get(k)) otherwise; -1 if there is no such k. */
int lastIndexOf (Object x);
/* NOTE: The methods iterator, listIterator, and subList produce
* views that become invalid if THIS is structurally modified by
* any other means (see text). */
/** An iterator that yields all the elements of THIS, in proper
* index order. (NOTE: it is always valid for iterator() to
* return the same value as would listIterator, below.) */
Iterator<T> iterator ();
/** A ListIterator that yields the elements K, K+1, ..., size()-1
* of THIS, in that order, where 0 <= K <= size(). Throws
* IndexOutOfBoundsException if K is out of range. */
ListIterator<T> listIterator (int k);
/** Same as listIterator (0) */
ListIterator<T> listIterator ();
/** A view of THIS consisting of the elements L, L+1, ..., U-1,
* in that order. Throws IndexOutOfBoundsException unless
* 0 <= L <= U <= size(). */
List<T> subList (int L, int U);
/* Other methods inherited from Collection:
* add, addAll, size, isEmpty, contains, containsAll, remove, toArray */
Figure 2.8: Required methods of interface java.util.List, beyond those inherited
from Collection.
/* Optional methods: */
/** Cause item K of THIS to be X, and items K+1, K+2, ... to contain
* the previous values of get(K), get(K+1), .... Throws
* IndexOutOfBoundsException unless 0<=K<=size(). */
void add (int k, T x);
/** Same effect as add (size (), x); always returns true. */
boolean add (T x);
/** If the elements returned by C.iterator () are x0, x1,..., in
* that order, then perform the equivalent of add(K,x0),
* add(K+1,x1), ..., returning true iff there was anything to
* insert. IndexOutOfBoundsException unless 0<=K<=size(). */
boolean addAll (int k, Collection<T> c);
/** Same as addAll(size(), c). */
boolean addAll (Collection<T> c);
/** Remove item K, moving items K+1, ... down one index position,
* and returning the removed item. Throws
* IndexOutOfBoundsException if there is no item K. */
T remove (int k);
/** Remove the first item equal to X, if any, moving subsequent
* elements one index position lower. Return true iff anything
* was removed. */
boolean remove (Object x);
/** Replace get(K) with X, returning the initial (replaced) value of
* get(K). Throws IndexOutOfBoundsException if there is no item K. */
T set (int k, T x);
/* Other methods inherited from Collection: removeAll, retainAll */
}
Figure 2.8, continued: Optional methods of interface java.util.List, beyond
those inherited from Collection.
As a result, there are a lot of possible operations on List that don't have to be
defined, because they fall out as a natural consequence of operations on sublists.
There is no need for a version of remove that deletes items i through j of a list, or
for a version of indexOf that starts searching at item k.
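For instance, assuming L, i, j, k, and x are suitably declared (the variable names
here are purely illustrative), one can write:

    // Remove items i through j-1 of L:
    L.subList (i, j).clear ();
    // Find the first index >= k at which x occurs in L (or -1 if none);
    // indexOf is relative to the sublist, so add k back in:
    int r = L.subList (k, L.size ()).indexOf (x);
    int indexFromK = (r == -1) ? -1 : r + k;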
Iterators (including ListIterators) provide another example of a view of
Collections. Again, you can access or (sometimes) modify the current contents of
a Collection through an iterator that its methods supply. For that matter, any
Collection is itself a view: the identity view, if you like.
Whenever there are two possible views of the same entity, there is a possibility
that using one of them to modify the entity will interfere with the other view. It's
not just that changes in one view are supposed to be seen in other views (as in the
example of clearing a sublist, above), but straightforward and fast implementations
of some views may malfunction when the entity being viewed is changed by other
means. What is supposed to happen when you call remove on an iterator, but the
item that is supposed to be removed (according to the specication of Iterator)
has already been removed directly (by calling remove on the full Collection)? Or
suppose you have a sublist containing items 2 through 4 of some full list. If the full
list is cleared, and then 3 items are added to it, what is in the sublist view?
Because of these quandaries, the full specification of many view-producing
methods (in the List interface, these are iterator, listIterator, and subList)
has a provision that the view becomes invalid if the underlying List is structurally
modified (that is, if items are added or removed) through some means other than
that view. Thus, the result of L.iterator() becomes invalid if you perform
L.add(...), or if you perform remove on some other Iterator or sublist produced
from L. By
contrast, we will also encounter views, such as those produced by the values method
on Map (see Figure 2.12), that are supposed to remain valid even when the
underlying object is structurally modified; it is an obligation on the implementors
of new kinds of Map to see that this is so.
2.2.4 Ordered Sets
The List interface describes data types that represent sequences in which the
programmer explicitly determines the order of items by the order or place in which
they are added. By contrast, the SortedSet interface is intended to describe
sequences in which the data determine the ordering according to some selected
relation. Of course, this immediately raises a question: in Java, how do we
represent this selected relation so that we can specify it? How do we make an
ordering relation a parameter?
Orderings: the Comparable and Comparator Interfaces
There are various ways for functions to define an ordering over some set of objects.
One way is to define boolean operations equals, less, greater, etc., with the
obvious meanings. Libraries in the C family of languages (which includes Java)
tend to combine all of these into a single function that returns an integer whose
sign denotes the relation. For example, on the type String, x.compareTo("cat")
package java.lang;
/** Describes types that have a natural ordering. */
public interface Comparable<T> {
/** Returns
 * * a negative value iff THIS < Y under the natural ordering;
 * * a positive value iff THIS > Y;
 * * 0 iff THIS and Y are "equivalent".
 * Throws ClassCastException if THIS and Y are incomparable. */
int compareTo (T y);
}
Figure 2.9: The interface java.lang.Comparable, which marks classes that define
a natural ordering.
returns an integer that is zero, negative, or positive, depending on whether x equals
"cat", comes before it in lexicographic order, or comes after it. Thus, the ordering
x ≤ y on Strings corresponds to the condition x.compareTo(y) <= 0.
For the purposes of the SortedSet interface, this ≤ (or ≼) ordering represented
by compareTo (or compare, described below) is intended to be a total ordering.
That is, it is supposed to be transitive (x ≤ y and y ≤ z implies x ≤ z), reflexive
(x ≤ x), and antisymmetric (x ≤ y and y ≤ x implies that x equals y). Also, for all
x and y in the function's domain, either x ≤ y or y ≤ x.
Some classes (such as String) define their own standard comparison operation.
The standard way to do so is to implement the Comparable interface, shown in
Figure 2.9. However, not all classes have such an ordering, nor is the natural
ordering necessarily what you want in any given case. For example, one can sort
Strings in dictionary order, reverse dictionary order, or case-insensitive order.
In the Scheme language, there is no particular problem: an ordering relation is
just a function, and functions are perfectly good values in Scheme. To a certain
extent, the same is true in languages like C and Fortran, where functions can be
used as arguments to subprograms, but, unlike Scheme, have access only to global
variables (what are called static fields or class variables in Java). Java does not
directly support functions as values, but it turns out that this is not a limitation.
The Java standard library defines the Comparator interface (Figure 2.10) to
represent things that may be used as ordering relations.
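For example, here is a sketch of a Comparator that orders Strings while ignoring
the case of letters (hypothetical, not from the text; the standard library provides
a similar ready-made object as String.CASE_INSENSITIVE_ORDER):

    import java.util.Comparator;
    /** An ordering on Strings that ignores the case of letters. */
    class CaseInsensitiveOrder implements Comparator<String> {
        public int compare (String x, String y) {
            return x.toLowerCase ().compareTo (y.toLowerCase ());
        }
    }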
The methods provided by both of these interfaces are supposed to be proper total
orderings. However, as usual, none of the conditions can actually be enforced by
the Java language; they are just conventions imposed by comment. The programmer
who violates these assumptions may cause all kinds of unexpected behavior.
Likewise, nothing can keep you from defining a compare operation that is
inconsistent with the .equals function. We say that compare (or compareTo) is
consistent with equals if x.equals(y) iff C.compare(x,y)==0. It's generally good
practice to maintain this consistency in the absence of a good reason to the contrary.
package java.util;
/** An ordering relation on certain pairs of objects. */
public interface Comparator<T> {
/** Returns
* * a negative value iff X < Y according to THIS ordering;
* * a positive value iff X > Y;
* * 0 iff X and Y are "equivalent" under the order;
* Throws ClassCastException if X and Y are incomparable.
*/
int compare (T x, T y);
/** True if ORD is "same" ordering as THIS. It is legal to return
* false (conservatively) even if ORD does define the same ordering,
* but should return true only if ORD.compare (X, Y) and
* THIS.compare(X, Y) always have the same value. */
boolean equals (Object ord);
}
Figure 2.10: The interface java.util.Comparator, which represents ordering rela-
tions between Objects.
The SortedSet Interface
The SortedSet interface shown in Figure 2.11 extends the Set interface so that
its iterator method delivers an Iterator that sequences through its contents in
order. It also provides additional methods that make sense only when there is
such an order. There are intended to be two ways to define this ordering: either
the programmer supplies a Comparator when constructing a SortedSet, which
defines the order, or else the contents of the set must be Comparable, and their
natural order is used.
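For example, using the standard TreeSet implementation of SortedSet, and the
hypothetical CaseInsensitiveOrder comparator sketched earlier:

    SortedSet<String> natural = new TreeSet<String> (); // uses compareTo
    SortedSet<String> chosen =
        new TreeSet<String> (new CaseInsensitiveOrder ());
    chosen.add ("Cat");
    boolean added = chosen.add ("cat"); // false: equivalent under the order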
2.3 The Java Map Abstractions
The term map or mapping is used in computer science and elsewhere as a synonym
for function in the mathematical sense: a correspondence between items in some
set (the domain) and another set (the codomain) in which each item of the domain
corresponds to (is mapped to by) a single item of the codomain.³
³Any number of members of the domain, including zero, may correspond to a given member of
the codomain. The subset of the codomain that is mapped to by some member of the domain is
called the range of the mapping, or the image of the domain under the mapping.
It is typical among programmers to take a rather operational view, and say
that a map-like data structure looks up a given key (domain value) to find the
associated value (codomain value). However, from a mathematical point of view, a
perfectly good interpretation is that a mapping is a set of pairs, (d, c), where d is a
package java.util;
public interface SortedSet<T> extends Set<T> {
/* Constructors. Classes that implement SortedSet should define
* at least the constructors
* CLASS (): An empty set ordered by natural order (compareTo).
* CLASS (CMP): An empty set ordered by the Comparator CMP.
* CLASS (C): A set containing the items in Collection C, in
* natural order.
* CLASS (S): A set containing a copy of SortedSet S, with the
* same order.
*/
/** The comparator used by THIS, or null if natural ordering used. */
Comparator<? super T> comparator ();
/** The first (smallest) item in THIS according to its ordering */
T first ();
/** The last (largest) item in THIS according to its ordering */
T last ();
/* NOTE: The methods headSet, tailSet, and subSet produce
* views that become invalid if THIS is structurally modified by
* any other means. */
/** A view of all items in THIS that are strictly less than X. */
SortedSet<T> headSet (T x);
/** A view of all items in THIS that are >= X. */
SortedSet<T> tailSet (T x);
/** A view of all items, y, in THIS such that X0 <= y < X1. */
SortedSet<T> subSet (T X0, T X1);
}
Figure 2.11: The interface java.util.SortedSet.
member of the domain, and c of the codomain.
2.3.1 The Map Interface
The standard Java library uses the java.util.Map interface, displayed in
Figures 2.12 and 2.13, to capture these notions of mapping. This interface provides
both the view of a map as a look-up operation (with the method get) and the view
of a map as a set of ordered pairs (with the method entrySet). This in turn
requires some representation for "ordered pair," provided here by the nested
interface Map.Entry. A programmer who wishes to introduce a new kind of map
therefore defines not only a concrete class to implement the Map interface, but
another one to implement Map.Entry.
2.3.2 The SortedMap Interface
An object that implements java.util.SortedMap is supposed to be a Map in which
the set of keys is ordered. As you might expect, the operations are analogous to
those of the interface SortedSet, as shown in Figure 2.15.
2.4 An Example
Consider the problem of reading in a sequence of pairs of names, (n_i, m_i). We
wish to create a list of all the first members, n_i, in alphabetical order, and, for
each of them, a list of all names m_i that are paired with them, with each m_i
appearing once, and listed in the order of first appearance. Thus, the input
John Mary George Jeff Tom Bert George Paul John Peter
Tom Jim George Paul Ann Cyril John Mary George Eric
might produce the output
Ann: Cyril
George: Jeff Paul Eric
John: Mary Peter
Tom: Bert Jim
We can use some kind of SortedMap to handle the n_i and, for each, a List to
handle the m_i. A possible method (taking a Reader as a source of input and a
PrintWriter as a destination for output) is shown in Figure 2.16.
package java.util;
public interface Map<Key, Val> {
/* Constructors: Classes that implement Map should
* have at least two constructors:
* CLASS (): Constructs an empty CLASS
* CLASS (M): Where M is any Map, constructs a CLASS that
* denotes the same abstract mapping as M. */
/* Required methods: */
/** The number of keys in the domain of THIS map. */
int size ();
/** True iff size () == 0 */
boolean isEmpty ();
/* NOTE: The methods keySet, values, and entrySet produce views
* that remain valid even if THIS is structurally modified. */
/** The domain of THIS. */
Set<Key> keySet ();
/** The range of THIS. */
Collection<Val> values ();
/** A view of THIS as the set of all its (key,value) pairs. */
Set<Map.Entry<Key, Val>> entrySet ();
/** True iff keySet().contains (KEY). */
boolean containsKey (Object key);
/** True iff values().contains (VAL). */
boolean containsValue (Object val);
/** The value mapped to by KEY, or null if KEY is not
 * in the domain of THIS. */
Val get (Object key);
/** True iff M is a Map and THIS and M represent the same mapping. */
boolean equals (Object M);
/** The sum of the hashCode values of all members of entrySet(). */
int hashCode ();
static interface Entry<Key, Val> { ... } // See Figure 2.14.
Figure 2.12: Required methods of the interface java.util.Map.
// Interface java.util.Map, continued
/* Optional methods: */
/** Set the domain of THIS to the empty set. */
void clear();
/** Cause get(KEY) to yield VAL, without disturbing other values. */
Val put(Key key, Val val);
/** Add all members of M.entrySet() to the entrySet() of THIS. */
void putAll(Map<? extends Key, ? extends Val> M);
/** Remove KEY from the domain of THIS. */
Val remove(Object key);
}
Figure 2.13: Optional methods of the interface java.util.Map.
/** Represents a (key,value) pair from some Map. In general, an Entry
* is associated with a particular underlying Map value. Operations that
* change the Entry (specifically setValue) are reflected in that
* Map. Once an entry has been removed from a Map as a result of
* remove or clear, further operations on it may fail. */
static interface Entry<Key,Val> {
/** The key part of THIS. */
Key getKey();
/** The value part of THIS. */
Val getValue();
/** Cause getValue() to become VAL, returning the previous value. */
Val setValue(Val val);
/** True iff E is a Map.Entry and both represent the same (key,value)
 * pair (i.e., keys are both null, or are .equal, and likewise for
 * values). */
boolean equals(Object e);
/** An integer hash value that depends only on the hashCode values
 * of getKey() and getValue() according to the formula:
 * (getKey() == null ? 0 : getKey().hashCode ())
 * ^ (getValue() == null ? 0 : getValue().hashCode ()) */
int hashCode();
}
Figure 2.14: The nested interface java.util.Map.Entry, which is nested within
the java.util.Map interface.
package java.util;
public interface SortedMap<Key,Val> extends Map<Key,Val> {
/* Constructors: Classes that implement SortedMap should
* have at least four constructors:
* CLASS (): An empty map whose keys are ordered by natural order.
* CLASS (CMP): An empty map whose keys are ordered by the Comparator CMP.
* CLASS (M): A map that is a copy of Map M, with keys ordered
* in natural order.
* CLASS (S): A map containing a copy of SortedMap S, with
* keys obeying the same ordering.
*/
/** The comparator used by THIS, or null if natural ordering used. */
Comparator<? super Key> comparator ();
/** The first (smallest) key in the domain of THIS according to
* its ordering */
Key firstKey ();
/** The last (largest) item in the domain of THIS according to
* its ordering */
Key lastKey ();
/* NOTE: The methods headMap, tailMap, and subMap produce views
* that remain valid even if THIS is structurally modified. */
/** A view of THIS consisting of the restriction to all keys in the
* domain that are strictly less than KEY. */
SortedMap<Key,Val> headMap (Key key);
/** A view of THIS consisting of the restriction to all keys in the
* domain that are greater than or equal to KEY. */
SortedMap<Key,Val> tailMap (Key key);
/** A view of THIS restricted to the domain of all keys, y,
* such that KEY0 <= y < KEY1. */
SortedMap<Key,Val> subMap (Key key0, Key key1);
}
Figure 2.15: The interface java.util.SortedMap, showing methods not included
in Map.
import java.util.*;
import java.io.*;
class Example {
/** Read (n_i, m_i) pairs from INP, and summarize all
 * pairings for each n_i in order on OUT. */
static void correlate (Reader inp, PrintWriter out)
throws IOException
{
Scanner scn = new Scanner (inp);
SortedMap<String, List<String>> associatesMap
= new TreeMap<String,List<String>> ();
while (scn.hasNext ()) {
String n = scn.next ();
// next() never returns null; guard against an unpaired name instead.
if (! scn.hasNext ())
throw new IOException ("bad input format");
String m = scn.next ();
List<String> associates = associatesMap.get (n);
if (associates == null) {
associates = new ArrayList<String> ();
associatesMap.put (n, associates);
}
if (! associates.contains (m))
associates.add (m);
}
for (Map.Entry<String, List<String>> e : associatesMap.entrySet ()) {
out.format ("%s:", e.getKey ());
for (String s : e.getValue ())
out.format (" %s", s);
out.println ();
}
}
}
Figure 2.16: An example using SortedMaps and Lists.
2.5 Managing Partial Implementations: Design Options
Throughout the Collection interfaces, you saw (in comments) that certain
operations were optional. Their specifications gave the implementor leave to use
throw new UnsupportedOperationException ();
as the body of the operation. This provides an elegant enough way not to implement
something, but it raises an important design issue. Throwing an exception is a
dynamic action. In general, the compiler will have no comment about the fact that
you have written a program that must inevitably throw such an exception; you will
discover only upon testing the program that the implementation you have chosen
for some data structure is not sufficient.
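For example, the following fragment (a sketch using the standard wrapper
Collections.unmodifiableList) compiles without complaint, even though its last
line must inevitably fail:

    List<String> view =
        Collections.unmodifiableList (new ArrayList<String> ());
    view.add ("oops"); // accepted by the compiler, but always throws
                       // UnsupportedOperationException when executed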
An alternative design would split the interfaces into smaller pieces, like this:
public interface ConstantIterator<T> {
Required methods of Iterator
}
public interface Iterator<T> extends ConstantIterator<T> {
void remove ();
}
public interface ConstantCollection<T> {
Required methods of Collection
}
public interface Collection<T> extends ConstantCollection<T> {
Optional methods of Collection
}
public interface ConstantSet<T> extends ConstantCollection<T> {
}
public interface Set<T> extends ConstantSet<T>, Collection<T> {
}
public interface ConstantList<T> extends ConstantCollection<T> {
Required methods of List
}
public interface List<T> extends Collection<T>, ConstantList<T> {
Optional methods of List
}
etc.
With such a design the compiler could catch attempts to call unsupported methods,
so that you wouldn't need testing to discover a gap in your implementation.
However, such a redesign would have its own costs. It's not quite as simple as
the listing above makes it appear. Consider, for example, the subList method in
ConstantList. Presumably, this would most sensibly return a ConstantList, since
if you are not allowed to alter a list, you cannot be allowed to alter one of its views.
That means, however, that the type List would need two subList methods (with
differing names): the one inherited from ConstantList, and a new one that produces
a List as its result, which would allow modification. Similar considerations apply
to the results of the iterator method; there would have to be two, one to return a
ConstantIterator, and the other to return an Iterator. Furthermore, this proposed
redesign would not deal with an implementation of List that allowed one to add
items, or clear all items, but not remove individual items. For that, you would either
still need the UnsupportedOperationException or an even more complicated nest
of classes.
Evidently, the Java designers decided to accept the cost of leaving some problems
to be discovered by testing in order to simplify the design of their library. By
contrast, the designers of the corresponding standard libraries in C++ opted to
distinguish operations that work on any collections from those that work only on
mutable collections. However, they did not design their library out of interfaces;
it is awkward at best to introduce new kinds of collection or map in the C++ library.
Chapter 3
Meeting a Specification
In Chapter 2, we saw and exercised a number of abstract interfaces, abstract in the
sense that they describe the common features, the method signatures, of whole
families of types without saying anything about the internals of those types and
without providing a way to create any concrete objects that implement those
interfaces.
In this chapter, we get a little closer to concrete representations, by showing
one way to fill in the blanks. In one sense, these won't be serious implementations;
they will use naive, rather slow data structures. Our purpose, rather, will be one
of exercising the machinery of object-oriented programming to illustrate ideas that
you can apply elsewhere.
To help implementors who wish to introduce new implementations of the
abstract interfaces we've covered, the Java standard library provides a parallel
collection of abstract classes with some methods filled in. Once you've supplied a
few key methods that remain unimplemented, you get all the rest for free. These
partial implementation classes are not intended to be used directly in most ordinary
programs, but only as implementation aids for library writers. Here is a list
of these classes and the interfaces they partially implement (all from the package
java.util):
Abstract Class           Interfaces
AbstractCollection       Collection
AbstractSet              Collection, Set
AbstractList             Collection, List
AbstractSequentialList   Collection, List
AbstractMap              Map
The idea of using partial implementations in this way is an instance of a design
pattern called Template Method. The term design pattern in the context of object-
oriented programming has come to mean the core of a solution to a particular
commonly occurring problem in program design.¹ The Abstract... classes are
¹The seminal work on the topic is the excellent book by E. Gamma, R. Helm, R. Johnson, and
J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley,
1995. This group and their book are often referred to as "The Gang of Four."
import java.util.*;
import java.lang.reflect.Array;
public class ArrayCollection<T> implements Collection<T> {
private T[] data;
/** An empty Collection */
public ArrayCollection () { data = (T[]) new Object[0]; }
/** A Collection consisting of the elements of C */
public ArrayCollection (Collection<? extends T> C) {
data = C.toArray((T[]) new Object[C.size ()]);
}
/** A Collection consisting of a view of the elements of A. */
public ArrayCollection (T[] A) { data = A; }
public int size () { return data.length; }
public Iterator<T> iterator () {
return new Iterator<T> () {
private int k = 0;
public boolean hasNext () { return k < size (); }
public T next () {
if (! hasNext ()) throw new NoSuchElementException ();
k += 1;
return data[k-1];
}
public void remove () {
throw new UnsupportedOperationException ();
}
};
}
public boolean isEmpty () { return size () == 0; }
public boolean contains (Object x) {
for (T y : this) {
if (x == null && y == null
|| x != null && x.equals (y))
return true;
}
return false;
}
Figure 3.1: Implementation of a new kind of read-only Collection from scratch.
public boolean containsAll (Collection<?> c) {
for (Object x : c)
if (! contains (x))
return false;
return true;
}
public Object[] toArray () { return toArray (new Object[size ()]); }
public <E> E[] toArray (E[] anArray) {
if (anArray.length < size ()) {
Class<?> typeOfElement = anArray.getClass ().getComponentType ();
anArray = (E[]) Array.newInstance (typeOfElement, size ());
}
System.arraycopy (data, 0, anArray, 0, size ());
return anArray;
}
private boolean UNSUPPORTED () {
throw new UnsupportedOperationException ();
}
public boolean add (T x) { return UNSUPPORTED (); }
public boolean addAll (Collection<? extends T> c) { return UNSUPPORTED (); }
public void clear () { UNSUPPORTED (); }
public boolean remove (Object x) { return UNSUPPORTED (); }
public boolean removeAll (Collection<?> c) { return UNSUPPORTED (); }
public boolean retainAll (Collection<?> c) { return UNSUPPORTED (); }
}
Figure 3.1, continued: Since this is a read-only collection, the methods for modify-
ing the collection all throw UnsupportedOperationException, the standard way
to signal unsupported features.
used as templates for real implementations. Using method overriding, the
implementor fills in a few methods; everything else in the template uses those
methods.²
²While the name Template Method may be appropriate for this design pattern, I must admit
that it has some unfortunate clashes with other uses of the terminology. First, the library defines
whole classes, while the name of the pattern focuses on individual methods within that class.
Second, the term template has another meaning within object-oriented programming; in C++ (and
apparently in upcoming revisions of Java), it refers to a particular language construct.
In the sections to follow, we'll look at how these classes are used and we'll look
at some of their internals for ideas about how to use some of the features of Java
classes. But first, let's have a quick look at the alternative.
3.1 Doing it from Scratch
For comparison, let's suppose we wanted to introduce a simple implementation that
simply allowed us to treat an ordinary array of Objects as a read-only Collection.
The direct way to do so is shown in Figure 3.1. Following the specification of
Collection, the first two constructors for ArrayCollection provide ways of
forming an empty collection (not terribly useful, of course, since you can't add to it)
and a copy of an existing collection. The third constructor is specific to the new
class, and provides a view of an array as a Collection; that is, the items in the
Collection are the elements of the array, and the operations are those of the
Collection interface. Next come the required methods. The Iterator that is
returned by iterator has an anonymous type; no user of ArrayCollection can
create an object of this type directly. Since this is a read-only collection, the
optional methods (which modify collections) are all unsupported.
A Side Excursion on Reflection. The implementation of the second toArray
method is rather interesting, in that it uses a fairly exotic feature of the Java
language: reflection. This term refers to language features that allow one to
manipulate constructs of a programming language within the language itself. In
English, we employ reflection when we say something like "The word hit is a verb."
The specification of toArray calls for us to produce an array of the same dynamic
type as the argument. To do so, we first use the method getClass, which is defined
on all Objects, to get a value of the built-in type java.lang.Class that stands for
(reflects) the dynamic type of the anArray argument. One of the operations on
type Class is getComponentType, which, for an array type, fetches the Class that
reflects the type of its elements. Finally, the newInstance method (defined in the
class java.lang.reflect.Array) creates a new array object, given its size and the
Class for its component type.
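The idiom can be packaged as a small utility, sketched here under the assumption
that model is an array of some reference type (sameTypeArray is a hypothetical
name, not part of any library):

    import java.lang.reflect.Array;
    /** A new array with the same dynamic component type as MODEL,
     * of length N. */
    static Object[] sameTypeArray (Object[] model, int n) {
        Class<?> typeOfElement = model.getClass ().getComponentType ();
        return (Object[]) Array.newInstance (typeOfElement, n);
    }

Thus sameTypeArray (new String[0], 4) produces a String[] of length 4, even
though the method itself never mentions the type String.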
3.2 The AbstractCollection Class
The implementation of ArrayCollection has an interesting feature: the methods
starting with isEmpty make no mention of the private data of ArrayCollection,
but instead rely entirely on the other (public) methods. As a result, they could be
employed verbatim in the implementation of any Collection class. The standard
Java library class AbstractCollection exploits this observation (see Figure 3.2).
It is a partially implemented abstract class that new kinds of Collection can
extend. At a bare minimum, an implementor can override just the definitions of
iterator and size to get a read-only collection class. For example, Figure 3.3
shows an easier re-write of ArrayCollection. If, in addition, the programmer
overrides the add method, then AbstractCollection will automatically provide
addAll as well. Finally, if the iterator method returns an Iterator that supports
the remove method, then AbstractCollection will automatically provide clear,
remove, removeAll, and retainAll.
In programs, the idea is to use AbstractCollection only in an extends clause.
That is, it is simply a utility class for the benefit of implementors creating new
kinds of Collection, and should not generally be used to specify the type of a
formal parameter, local variable, or field. This, by the way, is the explanation for
declaring the constructor for AbstractCollection to be protected; that keyword
emphasizes the fact that only extensions of AbstractCollection will call it.
You've already seen five examples of how AbstractCollection might work in
Figure 3.1: the methods isEmpty, contains, containsAll, and the two toArray
methods. Once you get the general idea, it is fairly easy to produce such method
bodies. The exercises ask you to produce a few more.
3.3 Implementing the List Interface
The abstract classes AbstractList and AbstractSequentialList are specialized
extensions of the class AbstractCollection provided by the Java standard library
to help define classes that implement the List interface. Which you choose de-
pends on the nature of the representation used for the concrete list type being
implemented.
3.3.1 The AbstractList Class
The abstract implementation of List, AbstractList, sketched in Figure 3.4, is
intended for representations that provide fast (generally constant-time) random
access to their elements; that is, representations with a fast implementation of get
and (if supplied) remove. Figure 3.5 shows how listIterator works, as a partial
illustration. There are a number of interesting techniques illustrated by this class.
Protected methods. The method removeRange is not part of the public inter-
face. Since it is declared protected, it may only be called within other classes
in the package java.util, and within the bodies of extensions of AbstractList.
Such methods are implementation utilities for use in the class and its extensions.
In the standard implementation of AbstractList, removeRange is used to im-
plement clear (which might not sound too important until you remember that
L.subList(k0,k1).clear() is how one removes an arbitrary section of a List).
package java.util;
public abstract class AbstractCollection<T> implements Collection<T> {
/** The empty Collection. */
protected AbstractCollection () { }
/** Unimplemented methods that must be overridden in any
* non-abstract class that extends AbstractCollection */
/** The number of values in THIS. */
public abstract int size ();
/** An iterator that yields all the elements of THIS, in some
* order. If the remove operation is supported on this iterator,
* then remove, removeAll, clear, and retainAll on THIS will work. */
public abstract Iterator<T> iterator ();
/** Override this default implementation to support adding */
public boolean add (T x) {
throw new UnsupportedOperationException ();
}
Default, general-purpose implementations of
contains (Object x), containsAll (Collection c), isEmpty (),
toArray (), toArray (Object[] A),
addAll (Collection c), clear (), remove (Object x),
removeAll (Collection c), and retainAll (Collection c)
/** A String representing THIS, consisting of a comma-separated
* list of the values in THIS, as returned by its iterator,
* surrounded by square brackets ([]). The elements are
* converted to Strings by String.valueOf (which returns "null"
* for the null pointer and otherwise calls the .toString() method). */
public String toString () { ... }
}
Figure 3.2: The abstract class java.util.AbstractCollection, which may be
used to help implement new kinds of Collection. All the methods behave as
specified in the specification of Collection. Implementors must fill in definitions
of iterator and size, and may either override the other methods, or simply use
their default implementations (not shown here).
import java.util.*;
/** A read-only Collection whose elements are those of an array. */
public class ArrayCollection<T> extends AbstractCollection<T> {
private T[] data;
/** An empty Collection */
public ArrayCollection () {
data = (T[]) new Object[0];
}
/** A Collection consisting of the elements of C */
public ArrayCollection (Collection<? extends T> C) {
data = C.toArray ((T[]) new Object[C.size ()]);
}
/** A Collection consisting of a view of the elements of A. */
public ArrayCollection (T[] A) {
data = A;
}
public int size () { return data.length; }
public Iterator<T> iterator () {
return new Iterator<T> () {
private int k = 0;
public boolean hasNext () { return k < size (); }
public T next () {
if (! hasNext ()) throw new NoSuchElementException ();
k += 1;
return data[k-1];
}
public void remove () {
throw new UnsupportedOperationException ();
}
};
}
}
Figure 3.3: Re-implementation of ArrayCollection, using the default implemen-
tations from java.util.AbstractCollection.
The default implementation of removeRange simply calls remove(k) repeatedly and
so is not particularly fast. But if a particular List representation allows some
better strategy, then the programmer can override removeRange, getting better
performance for clear (that's why the default implementation of the method is not
declared final, even though it is written to work for any representation of List).
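For instance, an array-backed extension might override it roughly as follows (a
sketch assuming hypothetical fields data, holding the elements, and count, the
current size):

    protected void removeRange (int k0, int k1) {
        // Close up the gap with a single arraycopy instead of
        // k1-k0 separate removes.
        System.arraycopy (data, k1, data, k0, count - k1);
        count -= k1 - k0;
        modCount += 1; // this is a structural modification
    }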
Checking for Invalidity. As we discussed in §2.2.3, the iterator, listIterator,
and subList methods of the List interface produce views of a list that become
invalid if the list is structurally changed. Implementors of List are under no
particular obligation to do anything sensible for the programmer who ignores this
provision; using an invalidated view may produce unpredictable results or throw an
unexpected exception, as convenient. Nevertheless, the AbstractList class goes to
some trouble to provide a way to explicitly check for this error, and immediately
throw a specific exception, ConcurrentModificationException, if it happens. The
field modCount (declared protected to indicate it is intended for List implementors,
not users) keeps track of the number of structural modifications to an AbstractList.
Every call to add or remove (either on the List directly or through a view) is
supposed to increment it. Individual views can then keep track of the last value they
saw for the modCount field of their underlying List and throw an exception if it
seems to have changed in the interim. We'll see an example in Figure 3.5.
Helper Classes. The subList method of AbstractList (at least in Sun's
implementation) uses a non-public utility type java.util.SubList to produce its
result. Because it is not public, java.util.SubList is in effect private to the
java.util package, and is not an official part of the services provided by that
package. However, being in the same package, it is allowed to access the non-public
fields (modCount) and utility methods (removeRange) of AbstractList. This is an
example of Java's mechanism for allowing trusted classes (those in the same
package) access to the internals of a class while excluding access from other
untrusted classes.
3.3.2 The AbstractSequentialList Class
The second abstract implementation of List, AbstractSequentialList (Figure 3.6),
is intended for use with representations where random access is relatively slow, but
the next operation of the list iterator is still fast.
The reason for having a distinct class for this case becomes clear when you
consider the implementations of get and the next methods of the iterators. If we
assume a fast get method, then it is easy to implement the iterators to have fast
next methods, as was shown in Figure 3.5. If get is slow (specifically, if the only
way to retrieve item k of the list is to sequence through the preceding k items), then
implementing next as in that figure would be disastrous; it would require Θ(N²)
operations to iterate through an N-element list. So using get to implement the
iterators is not always a good idea.
package java.util;
public abstract class AbstractList<T>
extends AbstractCollection<T> implements List<T> {
/** Construct an empty list. */
protected AbstractList () { modCount = 0; }
public abstract T get (int index);
public abstract int size ();
public T set (int k, T x) { return UNSUPPORTED (); }
public void add (int k, T x) { UNSUPPORTED (); }
public T remove (int k) { return UNSUPPORTED (); }
Default, general-purpose implementations of
add (x), addAll, clear, equals, hashCode, indexOf, iterator,
lastIndexOf, listIterator, set, and subList
/** The number of times THIS has had elements added or removed. */
protected int modCount;
/** Remove from THIS all elements with indices in the
range K0 .. K1-1. */
protected void removeRange (int k0, int k1) {
ListIterator<T> i = listIterator (k0);
for (int k = k0; k < k1 && i.hasNext (); k += 1) {
i.next (); i.remove ();
}
}
private <E> E UNSUPPORTED ()
{ throw new UnsupportedOperationException (); }
}
Figure 3.4: The abstract class AbstractList, used as an implementation aid in
writing implementations of List that are intended for random access. See Figure 3.5
for the inner class ListIteratorImpl.
public ListIterator<T> listIterator (int k0) {
return new ListIteratorImpl (k0);
}
private class ListIteratorImpl implements ListIterator<T> {
ListIteratorImpl (int k0)
{ lastMod = modCount; k = k0; lastIndex = -1; }
public boolean hasNext () { return k < size (); }
public boolean hasPrevious () { return k > 0; }
public T next () {
check (0, size ());
lastIndex = k; k += 1; return get (lastIndex);
}
public T previous () {
check (1, size ()+1);
k -= 1; lastIndex = k; return get (k);
}
public int nextIndex () { return k; }
public int previousIndex () { return k-1; }
public void add (T x) {
check (); lastIndex = -1;
k += 1; AbstractList.this.add (k-1, x);
lastMod = modCount;
}
public void remove () {
checkLast (); AbstractList.this.remove (lastIndex);
if (lastIndex < k) // the removed item was before the cursor
k -= 1;
lastIndex = -1; lastMod = modCount;
}
public void set (T x) {
checkLast (); AbstractList.this.set (lastIndex, x);
lastIndex = -1; lastMod = modCount;
}
Figure 3.5: Part of a possible implementation of AbstractList, showing the inner
class providing the value of listIterator.
// Class AbstractList.ListIteratorImpl, continued.
/* Private definitions */
/** modCount value expected for underlying list. */
private int lastMod;
/** Current position. */
private int k;
/** Index of last result returned by next or previous. */
private int lastIndex;
/** Check that there has been no concurrent modification. Throws
* appropriate exception if there has. */
private void check () {
if (modCount != lastMod) throw new ConcurrentModificationException();
}
/** Check that there has been no concurrent modification and that
* the current position, k, is in the range K0 <= k < K1. Throws
* appropriate exception if either test fails. */
private void check (int k0, int k1) {
check ();
if (k < k0 || k >= k1)
throw new NoSuchElementException ();
}
/** Check that there has been no concurrent modification and that
* there is a valid last element returned by next or previous.
* Throws appropriate exception if either test fails. */
private void checkLast () {
check ();
if (lastIndex == -1) throw new IllegalStateException ();
}
}
Figure 3.5, continued: Private representation of the ListIterator.
public abstract class AbstractSequentialList<T> extends AbstractList<T> {
/** An empty list */
protected AbstractSequentialList () { }
public abstract int size ();
public abstract ListIterator<T> listIterator (int k);
Default implementations of
add(k,x), addAll(k,c), get, iterator, remove(k), set
From AbstractList, inherited implementations of
add(x), clear, equals, hashCode, indexOf, lastIndexOf,
listIterator(), removeRange, subList
From AbstractCollection, inherited implementations of
addAll(), contains, containsAll, isEmpty, remove(), removeAll,
retainAll, toArray, toString
}
Figure 3.6: The class AbstractSequentialList.
On the other hand, if we were always to implement get(k) by iterating over the
preceding k items (that is, use the Iterator's methods to implement get rather
than the reverse), we would obviously lose out on representations where get is fast.
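By way of illustration, here is a sketch of how a default get for
AbstractSequentialList can be written in terms of the supplied listIterator:

    public T get (int k) {
        try {
            return listIterator (k).next ();
        } catch (NoSuchElementException e) {
            throw new IndexOutOfBoundsException ("Index: " + k);
        }
    }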
3.4 The AbstractMap Class
The AbstractMap class shown in Figure 3.7 provides a template implementation for
the Map interface. Overriding just the entrySet method to provide a read-only Set
gives a read-only Map. Additionally overriding the put method gives an extendable
Map, and implementing the remove method for entrySet().iterator() gives a fully
modifiable Map.
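As a small illustration (hypothetical, not from the text), here is a complete
read-only Map with a one-pair domain, obtained by overriding just entrySet.
SimpleImmutableEntry is assumed available (it appears in later versions of the
library); any other Map.Entry implementation would serve as well.

    import java.util.*;
    /** A read-only Map containing the single pair (KEY, VAL). The
     * inherited AbstractMap defaults supply get, containsKey, size,
     * and the other required methods in terms of entrySet. */
    class SingletonMap<K, V> extends AbstractMap<K, V> {
        private final Set<Map.Entry<K, V>> entries;
        SingletonMap (K key, V val) {
            Set<Map.Entry<K, V>> s = new HashSet<Map.Entry<K, V>> ();
            s.add (new AbstractMap.SimpleImmutableEntry<K, V> (key, val));
            entries = Collections.unmodifiableSet (s);
        }
        public Set<Map.Entry<K, V>> entrySet () { return entries; }
    }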
3.5 Performance Predictions
At the beginning of Chapter 2, I said that there are typically several implementa-
tions for a given interface. There are several possible reasons one might need more
than one. First, special kinds of stored items, keys, or values might need special
handling, either for speed, or because there are extra operations that make sense
only for these special kinds of things. Second, some particular Collections or
Maps may need a special implementation because they are part of something else,
such as the subList or entrySet views. Third, one implementation may perform
package java.util;
public abstract class AbstractMap<Key,Val> implements Map<Key,Val> {
/** An empty map. */
protected AbstractMap () { }
/** A view of THIS as the set of all its (key,value) pairs.
 * If the resulting Set's iterator supports remove, then THIS
 * map will support the remove and clear operations. */
public abstract Set<Entry<Key,Val>> entrySet ();
/** Cause get(KEY) to yield VAL, without disturbing other values. */
public Val put (Key key, Val val) {
throw new UnsupportedOperationException ();
}
Default implementations of
clear, containsKey, containsValue, equals, get, hashCode,
isEmpty, keySet, putAll, remove, size, values
/** A String representation of THIS, in the form
 * {KEY0=VALUE0, KEY1=VALUE1, ...},
 * where keys and values are converted using String.valueOf(...). */
public String toString () { ... }
}
Figure 3.7: The class AbstractMap.
better than another in some circumstances, but not in others. Finally, there may be
time-vs.-space tradeoffs between different implementations, and some applications
may have particular need for a compact (space-efficient) representation.
We can't make specific claims about the performance of the Abstract... family
of classes described here because they are templates rather than complete
implementations. However, we can characterize their performance as a function of
the methods that the programmer fills in. Here, let's consider two examples: the
implementation templates for the List interface.
AbstractList. The strategy behind AbstractList is to use the methods size,
get(k), add(k,x), set(k,x), and remove(k) supplied by the extending type to
implement everything else. The listIterator method returns a ListIterator
that uses get to implement next and previous, add (on AbstractList) to
implement the iterator's add, and remove (on AbstractList) to implement the
iterator's remove. The cost of the additional bookkeeping done by the iterator
consists of incrementing or decrementing an integer variable, and is therefore a
small constant. Thus, we can easily relate the costs of the iterator functions
directly to those of the supplied methods, as shown in the following table. To
simplify matters, we take the time costs of the size operation and the equals
operation on individual items to be constant. The costs of the plugged-in methods
are given names of the form C_a, C_g, etc.; the size of this (the List) is N, and
the size of the other Collection argument (denoted c, which we'll assume is the
same kind of List, just to be able to say more) is M.
              Costs of AbstractList Implementations

  List                                 ListIterator
  Method            Time as Θ(·)       Method      Time as Θ(·)
  add(k,X)          C_a                add         C_a
  get(k)            C_g                remove      C_r
  remove(k)         C_r                next        C_g
  set               C_s                previous    C_g
  remove(X)         C_r + N·C_g        set         C_s
  indexOf           N·C_g              hasNext     1
  lastIndexOf       N·C_g
  listIterator(k)   1
  iterator()        1
  subList           1
  size              1
  isEmpty           1
  contains          N·C_g
  containsAll(c)    N·M·C_g
  addAll(c)         M·C_g + (N+M)·C_a
  toArray           N·C_g
AbstractSequentialList. Let's now compare the AbstractList implementation
with AbstractSequentialList, which is intended to be used with representations
that don't have cheap get operations, but still do have cheap iterators. In this case,
the get(k) operation is implemented by creating a ListIterator and performing
a next operation on it k times. We get the following table:
         Costs of AbstractSequentialList Implementations

  List                                 ListIterator
  Method            Time as Θ(·)       Method      Time as Θ(·)
  add(k,X)          C_a + k·C_n        add         C_a
  get(k)            k·C_n              remove      C_r
  remove(k)         C_r + k·C_n        next        C_n
  set(k,X)          C_s + k·C_n        previous    C_p
  remove(X)         C_r + N·C_n        set         C_s
  indexOf           N·C_n              hasNext     1
  lastIndexOf       N·C_p
  listIterator(k)   k·C_n
  iterator()        1
  subList           1
  size              1
  isEmpty           1
  contains          N·C_n
  containsAll(c)    N·M·C_n
  addAll(c)         M·C_n + N·C_a
  toArray           N·C_n
Exercises
3.1. Provide a body for the addAll method of AbstractCollection. It can as-
sume that add will either throw UnsupportedOperationException if adding to the
Collection is not supported, or will add an element.
3.2. Provide a body for the removeAll method of AbstractCollection. You
may assume that, if removal is supported, the remove operation of the result of
iterator works.
3.3. Provide a possible implementation of the java.util.SubList class. This
utility class implements List and has one constructor:
/** A view of items K0 through K1-1 of THELIST. Subsequent
* modifications to THIS also modify THELIST. Any structural
* modification to THELIST other than through THIS and any
* iterators or sublists derived from it renders THIS invalid.
* Operations on an invalid SubList throw
* ConcurrentModificationException */
SubList (AbstractList theList, int k0, int k1) { ... }
3.4. For class AbstractSequentialList, provide possible implementations of add(k,x)
and get. Arrange the implementation so that performing a get of an element at or
near either end of the list is fast.
3.5. Extend the AbstractMap class to produce a full implementation of Map. Try
to leave as much as possible up to AbstractMap, implementing just what you need.
For a representation, provide an implementation of Map.Entry and then use the
existing implementation of Set provided by the Java library, HashSet. Call the
resulting class SimpleMap.
3.6. In §3.5, we did not talk about the performance of operations on the Lists
returned by the subList method. Provide these estimates for both AbstractList
and AbstractSequentialList. For AbstractSequentialList, the time requirement
for the get method on a sublist must depend on the first argument to subList
(the starting point). Why is this? What change to the definition of ListIterator
could make the performance of get (and other operations) on sublists independent
of where in the original list the sublist comes from?
Chapter 4
Sequences and Their
Implementations
In Chapters 2 and 3, we saw quite a bit of the List interface and some skeleton
implementations. Here, we review the standard representations (concrete imple-
mentations) of this interface, and also look at interfaces and implementations of
some specialized versions, the queue data structures.
4.1 Array Representation of the List Interface
Most production programming languages have some built-in data structure like
the Java array: a random-access sequence of variables, indexed by integers. The
array data structure has two main performance advantages. First, it is a compact
(space-efficient) representation for a sequence of variables, typically taking little
more space than the constituent variables themselves. Second, random access to
any given variable in the sequence is a fast, constant-time operation. The chief
disadvantage is that changing the size of the sequence represented is slow (in the
worst case). Nevertheless, with a little care, we will see that the amortized cost of
operations on array-backed lists is constant.
One of the built-in Java types is java.util.ArrayList, which has, in part, the
implementation shown in Figure 4.1.¹ So, you can create a new ArrayList with its
. So, you can create a new ArrayList with its
constructors, optionally choosing how much space it initially has. Then you can add
items (with add), and the array holding these items will be expanded as needed.
What can we say about the cost of the operations on ArrayList? Obviously,
get and size are Θ(1); the interesting one is add. As you can see, the capacity of
an ArrayList is always positive. The implementation of add uses ensureCapacity
whenever the array pointed to by data needs to expand, and it requests that the
capacity of the ArrayList (the size of data) should double whenever it needs to
expand. Let's look into the reason behind this design choice. We'll consider just
the call A.add(x), which calls A.add(A.size(), x).
¹The Java standard library type java.util.Vector provides essentially the same
representation. It predates ArrayList and the introduction of Java's standard Collection
classes, and was retrofitted to meet the List interface. As a result, many existing Java programs
tend to use Vector, and tend to use its (now redundant) pre-List operations, such as elementAt
and removeAllElements (same as get and clear). The Vector class has another difference: it is
synchronized, whereas ArrayList is not. See §10.1 for further discussion.
Suppose that we replace the lines
if (count + 1 > data.length)
ensureCapacity (data.length * 2);
with the alternative minimal expansion:
ensureCapacity (count+1);
In this case, once the initial capacity is exhausted, each add operation will expand
the array data. Let's measure the cost of add in the number of assignments to
array elements [why is that reasonable?]. In Java, we can take the cost of the
expression new Object[K] to be Θ(K). This does not change when we add in the
cost of copying elements from the previous array into it (using System.arraycopy).
Therefore, the worst-case cost, C_i, of executing A.add(x) using our simple
increment-size-by-1 scheme is

    C_i(K, M) = α_1,         if M > K;
                α_2(K + 1),  if M = K,

where K is A.size(), M ≥ K is A's current capacity (that is, A.data.length),
and the α_j are some constants. So, conservatively speaking, we can just say that
C_i(K, M) ∈ O(K).
Now let's consider the cost, C_d, of the implementation as shown in Figure 4.1,
where we always double the capacity when it must be increased. This time, we get

    C_d(K, M) = { α₁,           if M > K;
                { α₃(2K + 1),   if M = K.

The worst-case cost looks identical; the factor of two increase in size simply changes
the constant factor, and we can still use the same formula as before:
C_d(K, M) ∈ O(K).
So from this naïve worst-case asymptotic point of view, it would appear the two
alternative strategies have identical costs. Yet we ought to be suspicious. Consider
an entire series of add operations together, rather than just one. With the
increment-size-by-1 strategy, we expand every time. By contrast, with the size-doubling
strategy, we expand less and less often as the array grows, so that most
calls to add complete in constant time. So is it really accurate to characterize them
as taking time proportional to K?
Consider a series of N calls to A.add(x), starting with A an empty ArrayList
with initial capacity of M₀ < N. With the increment-by-1 strategy, call number M₀
(numbering from 0), M₀ + 1, M₀ + 2, etc., will cost time proportional to M₀ + 1,
M₀ + 2, . . . , respectively. Let's use

    C′_i(N, M₀)
package java.util;
/** A List with a constant-time get operation. At any given time,
* an ArrayList has a capacity, which indicates the maximum
* size() for which the one-argument add operation (which adds to
* the end) will execute in constant time. The capacity expands
* automatically as needed so as to provide constant amortized
* time for the one-argument add operation. */
public class ArrayList extends AbstractList implements Cloneable {
/** An empty ArrayList whose capacity is initially at least
* CAPACITY. */
public ArrayList(int capacity) {
data = new Object[Math.max (capacity, 2)]; count = 0;
}
public ArrayList () { this (8); }
public ArrayList (Collection c) {
this (c.size ()); addAll (c);
}
public int size() { return count; }
public Object get (int k) {
check (k, count); return data[k];
}
public Object remove (int k) {
Object old = data[k];
removeRange (k, k+1);
return old;
}
public Object set (int k, Object x) {
check (k, count);
Object old = data[k];
data[k] = x;
return old;
}
Figure 4.1: Implementation of the class java.util.ArrayList.
public void add (int k, Object obj) {
check (k, count+1);
if (count + 1 > data.length)
ensureCapacity (data.length * 2);
System.arraycopy (data, k, data, k+1, count - k);
data[k] = obj; count += 1;
}
/* Cause the capacity of this ArrayList to be at least N. */
public void ensureCapacity (int N) {
if (N <= data.length)
return;
Object[] newData = new Object[N];
System.arraycopy (data, 0, newData, 0, count);
data = newData;
}
/** A copy of THIS (overrides method in Object). */
public Object clone () { return new ArrayList (this); }
protected void removeRange (int k0, int k1) {
if (k0 >= k1)
return;
check (k0, count); check (k1, count+1);
System.arraycopy (data, k1, data, k0, count - k1);
count -= k1-k0;
}
private void check (int k, int limit) {
if (k < 0 || k >= limit)
throw new IndexOutOfBoundsException ();
}
private int count; /* Current size */
private Object[] data; /* Current contents */
}
Figure 4.1, continued.
to stand for the cost of a sequence of N add(x) operations on an ArrayList with
initial capacity M₀, starting from an empty list, using the increment-by-1 strategy.
This is clearly
    C′_i(N, M₀) = C_i(0, M₀) + ··· + C_i(M₀ − 1, M₀)
                 + C_i(M₀, M₀) + C_i(M₀ + 1, M₀ + 1) + ··· + C_i(N − 1, N − 1)
                = α₁M₀ + α₂((M₀ + 1) + (M₀ + 2) + ··· + N)
                = M₀(α₁ + α₂(N − M₀)) + α₂(1 + 2 + ··· + (N − M₀))
                = M₀(α₁ + α₂(N − M₀)) + α₂(N − M₀)(N − M₀ + 1)/2
                ∈ Θ(N²), for fixed M₀.

The cost, in other words, is quadratic in the number of items added.
Now consider the doubling strategy. In this case, calls number M₀, 2M₀, 4M₀,
etc., will incur costs proportional to 2M₀, 4M₀, 8M₀, etc. (the cost of doubling the
array and copying the original elements to it). All other calls will have constant
cost α₁. For simplicity, consider just sequences that end on a call that expands the
array. That is, N = 2ᵏM₀ + 1 for some k. For fixed M₀:
    C′_d(N, M₀) = C′_d(2ᵏM₀ + 1, M₀)
                = α₁M₀                      [calls #0 to #M₀ − 1]
                + α₃ · 2M₀                  [call #M₀]
                + α₁(M₀ − 1)                [calls #M₀ + 1 to #2M₀ − 1]
                + α₃ · 4M₀                  [call #2M₀]
                + α₁(2M₀ − 1)               [calls #2M₀ + 1 to #4M₀ − 1]
                + α₃ · 8M₀                  [call #4M₀]
                + ··· + α₁(2ᵏ⁻¹M₀ − 1)      [calls #2ᵏ⁻¹M₀ + 1 to #2ᵏM₀ − 1]
                + α₃ · 2ᵏ⁺¹M₀               [call #2ᵏM₀]
                = α₁(M₀ + M₀ + 2M₀ + 4M₀ + ··· + 2ᵏ⁻¹M₀ − k + 1)
                + α₃(2M₀ + 4M₀ + ··· + 2ᵏ⁺¹M₀)
                = M₀(α₁2ᵏ + α₃(2ᵏ⁺² − 3)) − k + 1
                ∈ Θ(2ᵏ⁺²) = Θ(2ᵏ) = Θ(2ᵏM₀ + 1) = Θ(N)
Surprise! The total time is linear in the total number of adds. If we credit the
total cost of N adds equally to all N operations, each gets a constant amount of
time. For the doubling strategy, add has constant-time amortized cost.
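To see the difference concretely, here is a small experiment of my own (not from
the text or library) that counts array-element assignments under the two expansion
policies; the class and method names are invented for illustration:

/** Illustrative only: counts array-element assignments performed by
 * N adds under the two expansion policies analyzed above. */
public class GrowthDemo {
    /** Assignments performed by N adds, doubling the capacity if
     * DOUBLING, else expanding by one element at a time. */
    static long cost (int n, boolean doubling) {
        long assignments = 0;
        int capacity = 2, size = 0;
        for (int i = 0; i < n; i += 1) {
            if (size + 1 > capacity) {
                capacity = doubling ? capacity * 2 : size + 1;
                assignments += size;   // copy old elements to the new array
            }
            assignments += 1;          // store the newly added element
            size += 1;
        }
        return assignments;
    }

    public static void main (String[] args) {
        int n = 100000;
        System.out.println ("increment-by-1: " + cost (n, false)); // grows like N*N/2
        System.out.println ("doubling:       " + cost (n, true));  // grows like 3N
    }
}

Running it for a few values of n exhibits the quadratic-versus-linear behavior
derived above.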
4.2 Linking in Sequential Structures
The term linked structure refers generally to composite, dynamically growable data
structures comprising small objects for the individual members connected together
by means of pointers (links).
4.2.1 Singly Linked Lists
The Scheme language has one pervasive compound data structure, the pair or cons
cell, which can serve to represent just about any data structure one can imagine.
Perhaps its most common use is in representing lists of things, as illustrated in
Figure 4.2a. Each pair consists of two containers, one of which is used to store
(a pointer to) the data item, and the second a pointer to the next pair in the list, or
a null pointer at the end. In Java, a rough equivalent to the pair is a class such as
the following:
class Entry {
Entry (Object head, Entry next) {
this.head = head; this.next = next;
}
Object head;
Entry next;
}
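For example (an illustrative fragment of my own), the three-item list in Figure 4.2a
could be built with

Entry L = new Entry ("ax", new Entry ("bat", new Entry ("syzygy", null)));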
We call lists formed from such pairs singly linked, because each pair carries one
pointer (link) to another pair.
Changing the structure of a linked list (its set of containers) involves what is
colloquially known as "pointer swinging." Figure 4.2 reviews the basic operations
for insertion and deletion on pairs used as lists.
4.2.2 Sentinels
As Figure 4.2 illustrates, the procedure for inserting or deleting at the beginning of
a linked list differs from the procedure for middle items, because it is the variable
L, rather than a next field, that gets changed:

L = L.next; // Remove first item of linked list pointed to by L
L = new Entry ("aardvark", L); // Add item to front of list.

We can avoid this recourse to special cases for the beginning of a list by employing
a clever trick known as a sentinel node.
The idea behind a sentinel is to use an extra object, one that does not carry
one of the items of the collection being stored, to avoid having any special cases.
Figure 4.3 illustrates the resulting representation.
Use of sentinels changes some tests. For example, testing to see if a linked list L
is empty without a sentinel is simply a matter of comparing L to null, whereas
the test for a list with a sentinel compares L.next to null.
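To make the trick concrete, here is a minimal sketch of my own (not from the text)
of insertion and removal on a singly linked list with a front sentinel, using the Entry
class defined above; the class and method names are invented:

/** A sketch of a singly linked list with a front sentinel. With the
 * sentinel present, insertion and removal at the front are no longer
 * special cases of the general operations. */
class SinglyLinkedList {
    private final Entry sentinel = new Entry (null, null); // holds no item

    /** Insert X after node PREV, which may be the sentinel. */
    void insertAfter (Entry prev, Object x) {
        prev.next = new Entry (x, prev.next);
    }

    /** Remove the node following PREV, which may be the sentinel. */
    void removeAfter (Entry prev) {
        prev.next = prev.next.next;
    }

    /** Add X to the front of the list: just an ordinary insertion. */
    void addToFront (Object x) { insertAfter (sentinel, x); }

    /** True iff the list is empty. */
    boolean isEmpty () { return sentinel.next == null; }
}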
4.2.3 Doubly Linked Lists
Singly linked lists are simple to create and manipulate, but they are at a
disadvantage for fully implementing the Java List interface. One obvious problem is
that the previous operation on list iterators has no fast implementation on singly
linked structures. One is pretty much forced to return to the start of the list and
[Figure 4.2 shows a singly linked list headed by L in four panels:
(a) Original list: ax, bat, syzygy
(b) After removing bat with L.next = L.next.next
(c) After adding balance with L.next = new Entry("balance", L.next)
(d) After deleting ax with L = L.next]
Figure 4.2: Common operations on the singly linked list representation. Starting
from an initial list, we remove one object (bat), and then insert in its place a new
one (balance). Next we remove the first item in the list. The objects removed
become garbage, and are no longer reachable via L.
[Figure 4.3 shows two singly linked lists with sentinel nodes: (a) a three-item list
(ax, bat, syzygy) headed by L; (b) an empty list headed by E.]
Figure 4.3: Singly linked lists employing sentinel nodes. The sentinels contain no
useful data. They allow all items in the list to be treated identically, with no special
case for the first node. The sentinel node is typically never removed or replaced
while the list is in use.
follow an appropriate number of next fields to almost but not quite return to the
current position, requiring time proportional to the size of the list. A somewhat
more subtle annoyance comes in the implementation of the remove operation on the
list iterator. To remove an item p from a singly linked list, you need a pointer to
the item before p, because it is the next field of that object that must be modified.
Both problems are easily solved by adding a predecessor link to the objects
in our list structure, making both the items before and after a given item in the
list equally accessible. As with singly linked structures, the use of front and end
sentinels further simplifies operations by removing the special cases of adding to or
removing from the beginning or end of a list. A further device in the case of doubly
linked structures is to make the entire list circular, that is, to use one sentinel as
both the front and back of the list. This cute trick saves the small amount of space
otherwise wasted by the prev link of the front sentinel and the next link of the last.
Figure 4.4 illustrates the resulting representation and the principal operations upon
it.
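Before looking at the full implementation, it may help to see the pointer
manipulations in isolation. The following fragment is a sketch of my own (the field
names parallel those of LinkedList.Entry in Figure 4.6, but the class itself is
invented for illustration):

/** A node of a circular, doubly linked list with a single sentinel.
 * A node whose prev and next point to itself is an empty list's
 * sentinel.  Illustrative sketch only. */
class DNode {
    Object data;
    DNode prev, next;

    DNode (Object data) { this.data = data; prev = next = this; }

    /** Insert node N immediately before THIS; inserting before the
     * sentinel appends to the end of the list. */
    void insertBefore (DNode n) {
        n.prev = prev; n.next = this;
        prev.next = n;
        prev = n;
    }

    /** Unlink THIS from its list (never applied to the sentinel). */
    void unlink () {
        prev.next = next;
        next.prev = prev;
    }
}

Note that neither operation needs a special case for the front or back of the list;
that is precisely what the circular arrangement with one sentinel buys us.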
4.3 Linked Implementation of the List Interface
The doubly linked structure supports everything we need to do to implement the
Java List interface. The type of the links (LinkedList.Entry) is private to the
implementation. A LinkedList object itself contains just a pointer to the list's
sentinel (which never changes, once created) and an integer variable containing
the number of items in the list. Technically, of course, the latter is redundant,
since one can always count the number of items in the list, but keeping this
variable allows size to be a constant-time operation. Figure 4.5 illustrates the three
[Figure 4.4 shows a circular, doubly linked list with a single sentinel in four panels:
(a) Initial list: ax, bat, syzygy
(b) After deleting item (bat)
(c) After adding item (balance)
(d) After removing all items, and removing garbage]
Figure 4.4: Doubly linked lists employing a single sentinel node to mark both front
and back. Shaded item is garbage.
Data structure after executing:
L = new LinkedList<String>();
L.add("axolotl");
L.add("kludge");
L.add("xerophyte");
I = L.listIterator();
I.next();
[Diagram: a three-entry circular list (axolotl, kludge, xerophyte) with sentinel;
L points to the LinkedList object, whose size is 3; the iterator I has fields
LinkedList.this, lastReturned, here, and nextIndex = 1.]
Figure 4.5: A typical LinkedList (pointed to by L) and a list iterator (pointed to
by I). Since the iterator belongs to an inner class of LinkedList, it contains an
implicit private pointer (LinkedList.this) that points back to the LinkedList
object from which it was created.
main data structures involved: LinkedList, LinkedList.Entry, and the iterator
LinkedList.LinkedIter.
4.4 Specialized Lists
A common use for lists is in representing sequences of items that are manipulated
and examined only at one or both ends. Of these, the most familiar are
• The stack (or LIFO queue, for "Last In, First Out"), which supports only
adding and deleting items at one end;
• The queue (or FIFO queue, for "First In, First Out"), which supports adding
at one end and deletion from the other; and
• The deque, or double-ended queue, which supports addition and deletion from
either end,
whose operations are illustrated in Figure 4.8.
package java.util;
public class LinkedList<T> extends AbstractSequentialList<T>
implements Cloneable {
public LinkedList () {
sentinel = new Entry ();
size = 0;
}
public LinkedList (Collection<? extends T> c) {
this();
addAll (c);
}
public ListIterator<T> listIterator (int k) {
if (k < 0 || k > size)
throw new IndexOutOfBoundsException ();
return new LinkedIter (k);
}
public Object clone () {
return new LinkedList (this);
}
public int size () { return size; }
private static class Entry<E> {
E data;
Entry<E> prev, next;
Entry (E data, Entry<E> prev, Entry<E> next) {
this.data = data; this.prev = prev; this.next = next;
}
Entry () { data = null; prev = next = this; }
}
private class LinkedIter implements ListIterator<T> {
See Figure 4.7.
}
private final Entry<T> sentinel;
private int size;
}
Figure 4.6: The class LinkedList.
package java.util;
public class LinkedList<T> extends AbstractSequentialList<T>
implements Cloneable {
...
private class LinkedIter implements ListIterator<T> {
Entry<T> here, lastReturned;
int nextIndex;
/** An iterator whose initial next element is item
* K of the containing LinkedList. */
LinkedIter (int k) {
if (k > size - k) { // Closer to the end
here = sentinel; nextIndex = size;
while (k < nextIndex) previous ();
} else {
here = sentinel.next; nextIndex = 0;
while (k > nextIndex) next ();
}
lastReturned = null;
}
public boolean hasNext () { return here != sentinel; }
public boolean hasPrevious () { return here.prev != sentinel; }
public T next () {
check (here);
lastReturned = here;
here = here.next; nextIndex += 1;
return lastReturned.data;
}
public T previous () {
check (here.prev);
lastReturned = here = here.prev;
nextIndex -= 1;
return lastReturned.data;
}
Figure 4.7: The inner class LinkedList.LinkedIter. This version does not check
for concurrent modification of the underlying List.
public void add (T x) {
lastReturned = null;
Entry<T> ent = new Entry<T> (x, here.prev, here);
nextIndex += 1;
here.prev.next = here.prev = ent;
size += 1;
}
public void set (T x) {
checkReturned ();
lastReturned.data = x;
}
public void remove () {
checkReturned ();
lastReturned.prev.next = lastReturned.next;
lastReturned.next.prev = lastReturned.prev;
if (lastReturned == here)
here = lastReturned.next;
else
nextIndex -= 1;
lastReturned = null;
size -= 1;
}
public int nextIndex () { return nextIndex; }
public int previousIndex () { return nextIndex-1; }
void check (Object p) {
if (p == sentinel) throw new NoSuchElementException ();
}
void checkReturned () {
if (lastReturned == null) throw new IllegalStateException ();
}
}
Figure 4.7, continued.
[Figure 4.8 shows three diagrams: (a) a stack, with push and pop at one end;
(b) a (FIFO) queue, with add at one end and removeFirst at the other; (c) a deque,
with addFirst and removeFirst at the front, and add and removeLast at the back.]
Figure 4.8: Three varieties of queues: sequential data structures manipulated only
at their ends.
4.4.1 Stacks
Java provides a type java.util.Stack as an extension of the type java.util.Vector
(itself an older variation of ArrayList):
package java.util;
public class Stack<T> extends Vector<T> {
/** An empty Stack. */
public Stack () { }
public boolean empty () { return isEmpty (); }
public T peek () { check (); return get (size () - 1); }
public T pop () { check (); return remove (size () - 1); }
public T push (T x) { add (x); return x; }
public int search (Object x) {
int r = lastIndexOf (x);
return r == -1 ? -1 : size () - r;
}
private void check () {
if (empty ()) throw new EmptyStackException ();
}
}
However, because it is one of the older types in the library, java.util.Stack
does not fit in as well as it might. In particular, there is no separate interface
describing "stackness." Instead there is just the Stack class, inextricably combining
an interface with an implementation. Figure 4.9 shows how a Stack interface (in
the Java sense) might be designed.
Stacks have numerous uses, in part because of their close relationship to recursion
and backtracking search. Consider, for example, a simple-minded strategy for
finding an exit to a maze. We assume some Maze class, and a Position class that
represents a position in the maze. From any position in the maze, you may be able
to move in up to four different directions (represented by the numbers 0–3, standing
perhaps for the compass points north, east, south, and west). The idea is that we
package ucb.util;
import java.util.*;
/** A LIFO queue of Ts. */
public interface Stack<T> {
/** True iff THIS is empty. */
boolean isEmpty ();
/** Number of items in the stack. */
int size ();
/** The last item inserted in the stack and not yet removed. */
T top ();
/** Remove and return the top item. */
T pop ();
/** Add X as the last item of THIS. */
void push (T x);
/** The index of the most-recently inserted item that is .equal to
* X, or -1 if it is not present. Item 0 is the least recently
* pushed. */
int lastIndexOf (Object x);
}
Figure 4.9: A possible definition of the abstract type Stack as a Java interface.
This is not part of the Java library, but its method names are more traditional than
those of Java's official java.util.Stack type. It is designed, furthermore, to fit in
with implementations of the List interface.
leave bread crumbs to mark each position we've already visited. From each position
we visit, we try stepping in each of the possible directions and continuing from that
point. If we find that we have already visited a position, or we run out of directions
to go from some position, we backtrack to the last position we visited before that
and continue with the directions we haven't tried yet from that previous position,
stopping when we get to an exit (see Figure 4.10). As a program (using method
names that I hope are suggestive), we can write this in two equivalent ways. First,
recursively:
/** Find an exit from M starting from PLACE. */
void findExit(Maze M, Position place) {
if (M.isAnExit (place))
M.exitAt (place);
if (! M.isMarkedAsVisited (place)) {
M.markAsVisited (place);
for (int dir = 0; dir < 4; dir += 1)
if (M.isLegalToMove (place, dir))
findExit (M, place.move (dir));
}
}
Second, an iterative version:
import ucb.util.Stack;
import ucb.util.ArrayStack;
/** Find an exit from M starting from PLACE. */
void findExit(Maze M, Position place0) {
Stack<Position> toDo = new ArrayStack<Position> ();
toDo.push (place0);
while (! toDo.isEmpty ()) {
Position place = toDo.pop ();
if (M.isAnExit (place))
M.exitAt (place);
if (! M.isMarkedAsVisited (place)) {
M.markAsVisited (place);
for (int dir = 3; dir >= 0; dir -= 1)
if (M.isLegalToMove (place, dir))
toDo.push (place.move (dir));
}
}
}
where ArrayStack is an implementation of ucb.util.Stack (see §4.5).
The idea behind the iterative version of findExit is that the toDo stack keeps
track of the values of place that appear as arguments to findExit in the recursive
version. Both versions visit the same positions in the same order (which is why
[Figure 4.10 shows a maze grid whose visited squares are numbered 0–16 in the
order the algorithm first reaches them, starting in the lower-left corner, with the
exit as a dark square on the right.]
Figure 4.10: Example of searching a maze using backtracking search (the findExit
procedure from the text). We start in the lower-left corner. The exit is the dark
square on the right. The lightly shaded squares are those visited by the algorithm,
assuming that direction 0 is up, 1 is right, 2 is down, and 3 is left. The numbers in
the squares show the order in which the algorithm first visits them.
the loop runs backwards in the iterative version). In effect, toDo plays the
role of the call stack in the recursive version. Indeed, typical implementations of
recursive procedures also use a stack for this purpose, although it is invisible to the
programmer.
4.4.2 FIFO and Double-Ended Queues
A first-in, first-out queue is what we usually mean by "queue" in informal English (or
"line" in American English): people or things join a queue at one end, and leave it at
the other, so that the first to arrive (or enqueue) are the first to leave (or dequeue).
Queues appear extensively in programs, where they can represent such things as
sequences of requests that need servicing. The Java library (as of Java 2, version
1.5) provides a standard FIFO queue interface, but it is intended specifically for
uses in which a program might have to wait for an element to get added to the
queue. Figure 4.11 shows a more classic possible interface.
The deque, which is the most general, double-ended queue, probably sees rather
little explicit use in programs. It uses even more of the List interface than does the
FIFO queue, and so the need to specialize is not particularly acute. Nevertheless,
for completeness, I have included a possible interface in Figure 4.12.
4.5 Stack, Queue, and Deque Implementation
We could implement a concrete stack class for our ucb.util.Stack interface as
in Figure 4.13: as an extension of ArrayList, just as java.util.Stack is an
extension of java.util.Vector. As you can see, the names of the Stack interface
methods are such that we can simply inherit implementations of size, isEmpty,
and lastIndexOf from ArrayList.
But let's instead spice up our implementation of ArrayStack with a little
generalization. Figure 4.14 illustrates an interesting kind of class known as an adapter or
wrapper (another of the design patterns introduced at the beginning of Chapter 3).
The class StackAdapter shown there will make any List object look like a stack.
The figure also shows an example of using it to make a concrete stack representation
out of the ArrayList class.
package ucb.util;
/** A FIFO queue */
public interface Queue<T> {
/** True iff THIS is empty. */
boolean isEmpty ();
/** Number of items in the queue. */
int size ();
/** The first item inserted in the queue and not yet removed.
* Requires !isEmpty (). */
T first ();
/** Remove and return the first item. Requires !isEmpty (). */
T removeFirst ();
/** Add X as the last item of THIS. */
void add (T x);
/** The index of the first (least-recently inserted) item that is
* .equal to X, or -1 if it is not present. Item 0 is first. */
int indexOf (Object x);
/** The index of the last (most-recently inserted) item that is
* .equal to X, or -1 if it is not present. Item 0 is first. */
int lastIndexOf (Object x);
}
Figure 4.11: A possible FIFO (First In, First Out) queue interface.
package ucb.util;
/** A double-ended queue */
public interface Deque<T> extends Queue<T> {
/** The last inserted item in the sequence. Assumes !isEmpty(). */
T last ();
/** Insert X at the beginning of the sequence. */
void addFirst (T x);
/** Remove the last item from the sequence. Assumes !isEmpty(). */
T removeLast ();
/* Plus inherited definitions of isEmpty, size, first, add,
* removeFirst, indexOf, and lastIndexOf */
}
Figure 4.12: A possible Deque (double-ended queue) interface.
public class ArrayStack<T>
extends java.util.ArrayList<T> implements Stack<T>
{
/** An empty Stack. */
public ArrayStack () { }
public T top () { check (); return get (size () - 1); }
public T pop () { check (); return remove (size () - 1); }
public void push (T x) { add (x); }
private void check () {
if (isEmpty ()) throw new java.util.EmptyStackException ();
}
}
Figure 4.13: An implementation of ArrayStack as an extension of ArrayList.
package ucb.util;
import java.util.*;
public class StackAdapter<T> implements Stack<T> {
public StackAdapter (List<T> rep) { this.rep = rep; }
public boolean isEmpty () { return rep.isEmpty (); }
public int size () { return rep.size (); }
public T top () { return rep.get (rep.size () - 1); }
public T pop () { return rep.remove (rep.size () - 1); }
public void push (T x) { rep.add (x); }
public int lastIndexOf (Object x) { return rep.lastIndexOf (x); }
}
public class ArrayStack<T> extends StackAdapter<T> {
public ArrayStack () { super (new ArrayList<T> ()); }
}
Figure 4.14: An adapter class that makes any List look like a Stack, and an
example of using it to create an array-based implementation of the ucb.util.Stack
interface.
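As a quick illustration of the adapter in use (a fragment of my own, assuming the
ucb.util classes above are available), the same client code works no matter which
List does the representing:

import ucb.util.Stack;
import ucb.util.StackAdapter;

class StackAdapterDemo {
    public static void main (String[] args) {
        // Same Stack interface, two different backing representations.
        Stack<String> s1 =
            new StackAdapter<String> (new java.util.ArrayList<String> ());
        Stack<String> s2 =
            new StackAdapter<String> (new java.util.LinkedList<String> ());
        s1.push ("axolotl");
        s2.push ("kludge");
        System.out.println (s1.top () + " " + s2.top ());
    }
}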
Likewise, given any implementation of the List interface, we can easily provide
implementations of Queue or Deque, but there is a catch. Both array-based and
linked-list-based implementations of List will support our Stack interface equally
well, giving push and pop methods that operate in constant amortized time.
However, using an ArrayList in the same naïve fashion to implement either of the
Queue or Deque interfaces gives very poor performance. The problem is obvious: as
we've seen, we can add or remove from the end (high index) of an array quickly,
but removing from the other (index 0) end requires moving over all the elements of
the array, which takes time Θ(N), where N is the size of the queue. Of course, we
can simply stick to LinkedLists, which don't have this problem, but there is also
a clever trick that makes it possible to represent general queues efficiently with an
array.
Instead of shifting over the items of a queue when we remove the first, let's
instead just change our idea of where in the array the queue starts. We keep two
indices into the array, one pointing to the first enqueued item, and one to the last.
These two indices chase each other around the array, circling back to index 0
when they pass the high-index end, and vice-versa. Such an arrangement is known
as a circular buffer. Figure 4.15 illustrates the representation. Figure 4.16 shows
part of a possible implementation.
[Figure 4.15 shows seven states (a)–(g) of a seven-element circular buffer, with
arrows marking the first and last indices as items are added and removed.]
Figure 4.15: Circular-buffer representation of a deque with N==7. Part (a) shows
an initial empty deque. In part (b), we've inserted four items at the end. Part (c)
shows the result of removing the first item. Part (d) shows the full deque resulting
from adding four items to the front. Removing the last three items gives (e), and
after removing one more we have (f). Finally, removing the rest of the items from
the end gives the empty deque shown in (g).
class ArrayDeque<T> implements Deque<T> {
/** An empty Deque with an initial capacity of N. */
public ArrayDeque(int N) {
first = 0; last = N - 1; size = 0; // first add wraps last to index 0
}
public int size () {
return size;
}
public boolean isEmpty() {
return size == 0;
}
public T first() {
return data.get (first);
}
public T last() {
return data.get (last);
}
Figure 4.16: Implementation of Deque interface using a circular buer.
public void add (T x) {
size += 1;
resize ();
last = (last+1 == data.size ()) ? 0 : last+1;
data.set (last, x);
}
public void addFirst (T x) {
size += 1;
resize ();
first = (first == 0) ? data.size () - 1 : first-1;
data.set (first, x);
}
public T removeLast () {
T val = last ();
last = (last == 0) ? data.size ()-1 : last-1;
size -= 1;
return val;
}
public T removeFirst () {
T val = first ();
first = (first+1 == data.size ()) ? 0 : first+1;
size -= 1;
return val;
}
private int first, last;
private final ArrayList<T> data = new ArrayList<T> ();
private int size;
/** Ensure that DATA has at least SIZE elements. */
private void resize () { left to the reader }
etc.
}
Figure 4.16, continued.
Exercises
4.1. Implement a type Deque, as an extension of java.util.Vector. To the
operations required by java.util.AbstractList, add first, last, insertFirst,
insertLast, removeFirst, removeLast, and do this in such a way that all the
operations on Vector continue to work (e.g., get(0) continues to get the same
element as first()), and such that the amortized cost for all these operations
remains constant.
4.2. Implement a type of List with the constructor:
public ConcatList (List<T> L0, List<T> L1) { ... }
that does not support the optional operations for adding and removing objects, but
gives a view of the concatenation of L0 and L1. That is, get(i) on such a list gives
element i in the concatenation of L0 and L1 at the time of the get operation (that
is, changes to the lists referenced by L0 and L1 are reected in the concatenated
list). Be sure also to make iterator and listIterator work.
4.3. A singly linked list structure can be circular. That is, some element in the list
can have a tail (next) field that points to an item earlier in the list (not necessarily
to the first element in the list). Come up with a way to detect whether there is such
a circularity somewhere in a list. Do not, however, use any destructive operations
on any data structure. That is, you can't use additional arrays, lists, Vectors, hash
tables, or anything like them to keep track of items in the list. Use just simple
list pointers without changing any fields of any list. See CList.java in the hw5
directory.
4.4. The implementations of LinkedList in Figure 4.6 and LinkedList.LinkedIter
in Figure 4.7 do not provide checking for concurrent modification of the underlying
list. As a result, a code fragment such as

for (ListIterator<Object> i = L.listIterator (); i.hasNext (); ) {
if (bad (i.next ()))
L.remove (i.previousIndex ());
}

can have unexpected effects. What is supposed to happen, according to the
specification for LinkedList, is that i becomes invalid as soon as you call L.remove, and
subsequent calls on methods of i will throw ConcurrentModificationExceptions.
a. For the LinkedList class, what goes wrong with the loop above, and why?
b. Modify our LinkedList implementation to perform the check for concurrent
modification (so that the loop above throws ConcurrentModificationException).
4.5. Devise a DequeAdapter class, analogous to StackAdapter, that allows one to
create deques (or queues) from arbitrary List objects.
4.6. Provide an implementation for the resize method of ArrayDeque (Figure 4.16).
Your method should double the size of the ArrayList being used to represent the
circular buer if expansion is needed. Be careful! You have to do more than simply
increase the size of the array, or the representation will break.
Chapter 5
Trees
In this chapter, we'll take a break from the definition of interfaces to libraries and
look at one of the basic data-structuring tools used in representing searchable
collections of objects, expressions, and other hierarchical structures: the tree. The term
tree refers to several different variants of what we will later call a connected, acyclic,
undirected graph. For now, though, let's not call it that and instead concentrate
on two varieties of rooted tree. First,
Definition: an ordered tree consists of
A. A node¹, which may contain a piece of data known as a label.
Depending on the application, a node may stand for any number of
things and the data labeling it may be arbitrarily elaborate. The
node part of a tree is known as its root node or root.
B. A sequence of 0 or more trees, whose root nodes are known as the
children of the root. Each node in a tree is the child of at most
one node: its parent. The children of any node are siblings of each
other².
The number of children of a node is known as the degree of that node. A
node with no children is called a leaf (node), external node, or terminal
node; all other nodes are called internal or non-terminal nodes.
We usually think of there being connections called edges between each node and
its children, and often speak of traversing or following an edge from parent to child
or back. Starting at any node, r, there is a unique, non-repeating path or sequence
of edges leading from r to any other node, n, in the tree with r as root. All nodes
along that path, including r and n, are called descendents of r, and ancestors of n.
A descendent of r is a proper descendent if it is not r itself; proper ancestors are
defined analogously. Any node in a tree is the root of a subtree of that tree. Again,
a proper subtree of a tree is one that is not equal to (and is therefore smaller than)
¹The term vertex may also be used, as it is with other graphs, but node is traditional for trees.
²The word "father" has been used in the past for parent and "son" for child. Of course, this is
no longer considered quite proper, although I don't believe that the freedom to live one's life as a
tree was ever an official goal of the women's movement.
[Figure 5.1 shows an ordered tree with nodes numbered 0–9 arranged on levels 0–3;
its height is 3, its path length is 18, its external path length is 12, and its internal
path length is 6.]
Figure 5.1: An illustrative ordered tree. The leaf nodes are squares. As is traditional,
the tree grows downward.
the tree. Any set of disjoint trees (such as the trees rooted at all the children of a
node) is called a forest.
The distance from a node, n, to the root, r, of a tree (the number of edges that
must be followed to get from n to r) is the level (or depth) of that node in the tree.
The maximum level of all nodes in a tree is called the height of the tree. The sum
of the levels of all nodes in a tree is the path length of the tree. We also define
the internal (external) path length as the sum of the levels of all internal (external)
nodes. Figure 5.1 illustrates these definitions. All the levels shown in that figure
are relative to node 0. It also makes sense to talk about the level of node 7 in the
tree rooted at node 1, which would be 2.
If you look closely at the definition of ordered tree, you'll see that it has to have
at least one node, so that there is no such thing as an empty ordered tree. Thus,
child number j of a node with k > j children is always non-empty. It is easy enough
to change the definition to allow for empty trees:
Definition: A positional tree is either
A. Empty (missing), or
B. A node (generally labeled) and, for every non-negative integer, j,
a positional tree: the jth child.
The degree of a node is the number of non-empty children. If all nodes
in a tree have children only in positions < k, we say it is a k-ary tree.
Leaf nodes are those with no non-empty children; all others are internal
nodes.
Perhaps the most important kind of positional tree is the binary tree, in which
k = 2. For binary trees, we generally refer to child 0 and child 1 as the left and
right children, respectively.
A full k-ary tree is one in which all internal nodes except possibly the rightmost
bottom one have degree k. A tree is complete if it is full and all its leaf nodes occur
last when read top to bottom, left to right, as in Figure 5.2c. Complete binary trees
are of interest because they are, in some sense, maximally "bushy"; for any given
number of internal nodes, they minimize the internal path length of the tree, which
[Figure 5.2 shows three binary trees: (a) a seven-node tree that is not full; (b) a
nine-node tree that is full, but not complete; (c) an eleven-node tree, nodes 0–10,
that is complete.]
Figure 5.2: A forest of binary trees: (a) is not full; (b) is full, but not complete;
(c) is complete. Tree (c) would still be complete if node 10 were missing, but not if
node 9 were missing.
is interesting because it is proportional to the total time required to perform the
operation of moving from the root to an internal node once for each internal node
in the tree.
5.1 Expression trees
Trees are generally interesting when one is trying to represent a recursively-defined
type. A familiar example is the expression tree, which represents an expression
recursively defined as
• An identifier or constant, or
• An operator (which stands for some function of k arguments) and k expressions
(which are its operands).
Given this definition, expressions are conveniently represented by trees whose
internal nodes contain operators and whose external nodes contain identifiers or
constants. Figure 5.3 shows a representation of the expression x*(y + 3) - z.
[Figure 5.3 shows a tree with - at the root, whose children are * and z; the children
of * are x and +, and the children of + are y and 3.]
Figure 5.3: An expression tree for x*(y+3)-z.
As an illustration of how one deals with trees, consider the evaluation of
expressions. As often happens, the definition of the value denoted by an expression
corresponds closely to the structure of an expression:
• The value of a constant is the value it denotes as a numeral. The value of a
variable is its currently-defined value.
• The value of an expression consisting of an operator and operand expressions
is the result of applying the operator to the values of the operand expressions.
This definition immediately suggests a program.
/** The value currently denoted by the expression E (given
* current values of any variables). Assumes E represents
* a valid expression tree, and that all variables
* contained in it have values. */
static int eval(Tree E)
{
if (E.isConstant ())
return E.valueOf ();
else if (E.isVar ())
return currentValueOf (E.variableName ());
else
return perform (E.operator (),
eval (E.left ()), eval (E.right ()));
}
Here, we posit the existence of a definition of Tree that provides operators for
detecting whether E is a leaf representing a constant or variable, for extracting the
data (values, variable names, or operator names) stored at E, and for finding the
left and right children of E (for internal nodes). We assume also that perform takes
an operator name (e.g., "+") and two integer values, and performs the indicated
computation on those integers. The correctness of this program follows immediately
by using induction on the structure of trees (the trees rooted at a node's children
are always subtrees of the node's tree), and by observing the match between the
definition of the value of an expression and the program.
5.2 Basic tree primitives
There are a number of possible sets of operations one might define on trees, just
as there were for sequences. Figure 5.4 shows one possible class (assuming integer
labels). Typically, only some of the operations shown would actually be provided in
a given application. For binary trees, we can be a bit more specialized, as shown in
Figure 5.5. In practice, we don't often define BinaryTree as an extension of Tree,
but I've done so here just as an illustration.
/** A positional tree with labels of type T. The empty tree is null. */
class Tree<T> {
/** A leaf node with given LABEL */
public Tree(T label) ...
/** An internal node with given label, and K empty children */
public Tree(T label, int k) ...
/** The label of this node. */
public T label() ...
/** The number of non-empty children of this node. */
public int degree() ...
/** Number of children (argument K to the constructor) */
public int numChildren() ...
/** Child number K of this. */
public Tree<T> child(int k) ...
/** Set child number K of this to C, 0<=K<numChildren().
* C must not already be in this tree, or vice-versa. */
public void setChild(int k, Tree<T> C) ...
}
Figure 5.4: A class representing positional tree nodes.
class BinaryTree<T> extends Tree<T> {
public BinaryTree(T label,
BinaryTree<T> left, BinaryTree<T> right) {
super(label, 2);
setChild(0, left); setChild(1, right);
}
public BinaryTree<T> left() { return (BinaryTree) child(0); }
public void setLeft(BinaryTree<T> C) { setChild(0, C); }
public BinaryTree<T> right() { return (BinaryTree) child(1); }
public void setRight(BinaryTree<T> C) { setChild(1, C); }
}
Figure 5.5: A possible class representing binary trees.
The operations so far all assume "root-down" processing of the tree, in which it is
necessary to proceed from parent to child. When it is more appropriate to go the
other way, the following operations are useful as an addition to (or substitute for)
the constructors and child methods of Tree.
/** The parent of T, if any (otherwise null). */
public Tree<T> parent() ...
/** Sets parent() to P. */
public void setParent(Tree<T> P);
/** A leaf node with label L and parent P */
public Tree(T L, Tree<T> P);
5.3 Representing trees
As usual, the representation one uses for a tree depends in large part upon the uses
one has for it.
5.3.1 Root-down pointer-based binary trees
For doing the traversals on binary trees described below (§5.4), a straightforward
transcription of the recursive definition is often appropriate, so that the fields are

T L; /* Data stored at node. */
BinaryTree<T> left, right; /* Left and right children */

As I said about the sample definition of BinaryTree, this specialized representation
is in practice more common than simply re-using the implementation of Tree. If the
parent operation is to be supported, of course, we can add an additional pointer:

BinaryTree<T> parent; // or Tree, as appropriate
5.3.2 Root-down pointer-based ordered trees
The fields used for BinaryTree are also useful for certain non-binary trees, thanks
to the leftmost-child, right-sibling representation. Assume that we are representing
an ordered tree in which each internal node may have any number of children. We
can have left for any node point to child #0 of the node and have right point to
the next sibling of the node (if any), as illustrated in Figure 5.6.
A small example might be in order. Consider the problem of computing the
sum of all the node values in a tree whose nodes contain integers (for which we'll
use the library class Integer, since our labels have to be Objects). The sum of all
nodes in a tree is the sum of the value in the root plus the sum of the values in all
children. We can write this as follows:
[Figure 5.6 shows the tree of Figure 5.1 re-drawn so that each left (down) link
points to a node's first child and each right link points to its next sibling.]
Figure 5.6: Using a binary tree representation to represent an ordered tree of
arbitrary degree. The tree represented is the one from Figure 5.1. Left (down) links
point to the first child of a node, and right links to the next sibling.
/** The sum of the values of all nodes of T, assuming T is an
ordered tree with no missing children. */
static int treeSum(Tree<Integer> T)
{
int S;
S = T.label();
for (int i = 0; i < T.degree(); i += 1)
S += treeSum(T.child(i));
return S;
}
(Java's unboxing operations are silently at work here, turning Integer labels into
ints.)
An interesting side light is that the inductive proof of this program contains no
obvious base case. The program above is nearly a direct transcription of "the sum
of the value in the root plus the sum of the values in all children."
5.3.3 Leaf-up representation
For applications where parent is the important operation, and child is not, a
different representation is useful:

T label;
Tree<T> parent; /* Parent of current node */

Here, child is an impossible operation.
The representation has a rather interesting advantage: it uses less space; there is
one fewer pointer in each node. If you think about it, this might at first seem odd,
since each representation requires one pointer per edge. The difference is that the
parental representation does not need null pointers in all the external nodes, only
in the root node. We will see applications of this representation in later chapters.
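For instance (a fragment of my own devising), the level of a node is easily computed
by chasing parent links, something the root-down representation cannot do without
a search from the root:

/** The level (depth) of node N in its tree: the number of parent
 * links followed to reach the root.  Assumes the parent operation
 * from the end of 5.2.  Illustrative sketch only. */
static <T> int level (Tree<T> n) {
    int depth = 0;
    for (Tree<T> p = n.parent (); p != null; p = p.parent ())
        depth += 1;
    return depth;
}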
5.3.4 Array representations of complete trees
When a tree is complete, there is a particularly compact representation using an
array. Consider the complete tree in Figure 5.2c. The parent of any node numbered
k > 0 in that figure is node number ⌊(k − 1)/2⌋ (or in Java, (k-1)/2 or (k-1)>>1);
the left child of node k is 2k + 1 and the right is 2k + 2. Had we numbered the nodes
from 1 instead of 0, these formulae would have been even simpler: ⌊k/2⌋ for the
parent, 2k for the left child, and 2k + 1 for the right. As a result, we can represent
such complete trees as arrays containing just the label information, using indices
into the array as pointers. Both the parent and child operations become simple.
Of course, one must be careful to maintain the completeness property, or gaps will
develop in the array (indeed, for certain incomplete trees, it can require an array
with 2ʰ − 1 elements to represent a tree with h nodes).
Unfortunately, the headers needed to effect this representation differ slightly
from the ones above, since accessing an element of a tree represented by an array
in this way requires three pieces of information (an array, an upper bound, and an
index) rather than just a single pointer. In addition, we presumably want routines
for allocating space for a new tree, specifying in advance its size. Here is an example,
with some of the bodies supplied as well.
/** A BinaryTree2<T> is an entire binary tree with labels of type T.
The nodes in it are denoted by their breadth-first (level-order)
number in a complete tree. */
class BinaryTree2<T> {
protected T[] label;
protected int size;
/** A new BinaryTree2 with room for N labels. */
public BinaryTree2(int N) {
label = (T[]) new Object[N]; size = 0;
}
public int currentSize() { return size; }
public int maxSize() { return label.length; }
/** The label of node K in breadth-first order.
* Assumes 0 <= k < size. */
public T label(int k) { return label[k]; }
/** Cause label(K) to be VAL. */
public void setLabel(int k, T val) { label[k] = val; }
public int left(int k) { return 2*k+1; }
public int right(int k) { return 2*k+2; }
public int parent(int k) { return (k-1)/2; }
Continues. . .
[Figure 5.7 shows an 11-element array holding the labels 0–10 in order.]
Figure 5.7: The binary tree in Figure 5.2c, represented with an array. The labels
of the nodes happen to equal their breadth-first positions. In that figure, nodes 7
and 8 are the left and right children of node 3. Therefore, their labels appear at
positions 7 (3 × 2 + 1) and 8 (3 × 2 + 2).
Continuation of BinaryTree2<T>:
/** Add one more node to the tree, the next in breadth-first
* order. Assumes currentSize() < maxSize(). */
public void extend(T label) {
this.label[size] = label; size += 1;
}
}
We will see this array representation later, when we deal with heap data structures.
The representation of the binary tree in Figure 5.2c is illustrated in Figure 5.7.
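For instance (an invented usage fragment), the tree of Figure 5.7 could be built and
consulted like this:

BinaryTree2<Integer> t = new BinaryTree2<Integer> (11);
for (int i = 0; i < 11; i += 1)
    t.extend (i);                     // label node i with i, in level order
System.out.println (t.label (t.left (3)));   // prints 7
System.out.println (t.label (t.parent (8))); // prints 3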
5.3.5 Alternative representations of empty trees
In our representations, the empty tree tends to have a special status. For
example, we can formulate a method to access the left node of a tree, T, with the
syntax T.left(), but we can't write a method such that T.isEmpty() is true iff
T references the empty tree. Instead, we must write T == null. The reason, of
course, is that we represent the empty tree with the null pointer, and no instance
methods are defined on the null pointer's dynamic type (more concretely, we get a
NullPointerException if we try). If the null tree were represented by an ordinary
object pointer, it wouldn't need a special status.
For example, we could extend our definition of Tree from the beginning of §5.2
as follows:
class Tree<T> {
...
public final Tree<T> EMPTY = new EmptyTree<T> ();
/** True iff THIS is the empty tree. */
public boolean isEmpty () { return false; }
private static class EmptyTree<T> extends Tree<T> {
/** The empty tree */
private EmptyTree () { }
public boolean isEmpty () { return true; }
public int degree() { return 0; }
public int numChildren() { return 0; }
/** The kth child (always an error). */
public Tree<T> child(int k) {
throw new IndexOutOfBoundsException ();
}
/** The label of THIS (always an error). */
public T label () {
throw new IllegalStateException ();
}
}
}
There is only one empty tree (guaranteed because the EmptyTree class is private
to the Tree class, an example of the Singleton design pattern), but this tree is a
full-fledged object, and we will have less need to make special tests for null to avoid
exceptions. We'll be extending this representation further in the discussions of tree
traversals (see §5.4.2).
5.4 Tree traversals
The function eval in §5.1 traverses (or walks) its argument; that is, it processes
each node in the tree. Traversals are classified by the order in which they process
the nodes of a tree. In the program eval, we first traverse (i.e., evaluate, in this
case) any children of a node, and then perform some processing on the results
of these traversals and other data in the node. The latter processing is known
generically as visiting the node. Thus, the pattern for eval is "traverse the children
of the node, then visit the node," an order known as postorder. One could also
use postorder traversal for printing out the expression tree in reverse Polish form,
where visiting a node means printing its contents (the tree in Figure 5.3 would come
out as x y 3 + * z -). If the primary processing for each node (the "visitation")
occurs before that of the children, giving the pattern "visit the node, then traverse
its children," we get what is known as preorder traversal. Finally, the nodes in
Figures 5.1 and 5.2 are all numbered in level order or breadth-first order, in which
nodes at a given level of the tree are visited before any nodes at the next.
All of the traversal orders so far make sense for any kind of tree we've considered.
There is one other standard traversal ordering that applies exclusively to binary
trees: the inorder or symmetric-order traversal. Here, the pattern is "traverse the
left child of the node, visit the node, and then traverse the right child." In the case
of expression trees, for example, such an order would reproduce the represented
expression in infix order. Actually, that's not quite accurate, because to get the
expression properly parenthesized, the precise operation would have to be something
like "write a left parenthesis, then traverse the left child, then write the operator,
then traverse the right child, then write a right parenthesis," in which the node
seems to be visited several times. However, although such examples have led to at
least one attempt to introduce a more general notation for traversals³, we usually

³For example, Wulf, Shaw, Hilfinger, and Flon used such a classification scheme in Fundamental
Structures of Computer Science (Addison-Wesley, 1980). Under that system, a preorder traversal
is NLR (for visit Node, traverse Left, traverse Right), postorder is LRN, inorder is LNR, and the
true order for writing parenthesized expressions is NLNRN. This nomenclature seems not to have
caught on.
[Figure 5.8 shows three copies of a seven-node binary tree whose nodes are numbered
0–6 in the order visited by postorder, preorder, and inorder traversal, respectively.]
Figure 5.8: Order of node visitation in postorder, preorder, and inorder traversals.
just classify them approximately in one of the categories described above and leave
it at that.
Figure 5.8 illustrates the nodes of several binary trees numbered in the order
they would be visited by preorder, inorder, and postorder tree traversals.
5.4.1 Generalized visitation
I've been deliberately vague about what "visiting" means, since tree traversal is
a general concept that is not specific to any particular action at the tree nodes.
In fact, it is possible to write a general definition of traversal that takes as a
parameter the action to be "visited upon" each node of the tree. In languages that,
like Scheme, have function closures, we simply make the visitation parameter be a
function parameter, as in
;; Visit the nodes of TREE, applying VISIT to each in inorder
(define (inorder-walk tree visit)
(if (not (null? tree))
(begin (inorder-walk (left tree) visit)
(visit tree)
(inorder-walk (right tree) visit))))
so that printing all the nodes of a tree, for example, is just
(inorder-walk myTree (lambda (x) (display (label x)) (newline)))
Rather than using functions, in Java we use objects (as for the java.util.Comparator
interface in §2.2.4). For example, we can define interfaces such as
public interface TreeVisitor<T> {
void visit (Tree<T> node);
}
public interface BinaryTreeVisitor<T> {
void visit (BinaryTree<T> node);
}
The inorder-walk procedure above becomes
static <T> BinaryTreeVisitor<T>
inorderWalk (BinaryTree<T> tree,
BinaryTreeVisitor<T> visitor)
{
if (tree != null) {
inorderWalk (tree.left (), visitor);
visitor.visit (tree);
inorderWalk (tree.right (), visitor);
}
return visitor;
}
and our sample call is
inorderWalk (myTree, new PrintNode ());
where myTree is, let's say, a BinaryTree<String> and we have defined
class PrintNode implements BinaryTreeVisitor<String> {
public void visit (BinaryTree<String> node) {
System.out.println (node.label ());
}
}
Clearly, the PrintNode class could also be used with other kinds of traversals.
Alternatively, we can leave the visitor anonymous, as it was in the original Scheme
program:
inorderWalk (myTree,
new BinaryTreeVisitor<String> () {
public void visit (BinaryTree<String> node) {
System.out.println (node.label ());
}
});
The general idea of encapsulating an operation as we've done here and then
carrying it to each item in a collection is another design pattern known simply as
Visitor.
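The same machinery handles the level-order traversal mentioned earlier, which fits
none of the recursive patterns. Here is a sketch of my own, parallel to inorderWalk,
that uses a java.util.LinkedList as a FIFO queue:

/** Visit the nodes of TREE in level (breadth-first) order, applying
 * VISITOR to each.  Illustrative sketch only. */
static <T> BinaryTreeVisitor<T>
levelOrderWalk (BinaryTree<T> tree, BinaryTreeVisitor<T> visitor)
{
    java.util.LinkedList<BinaryTree<T>> toDo =
        new java.util.LinkedList<BinaryTree<T>> ();
    if (tree != null)
        toDo.add (tree);
    while (! toDo.isEmpty ()) {
        BinaryTree<T> node = toDo.removeFirst (); // dequeue
        visitor.visit (node);
        if (node.left () != null)   // enqueue children, left to right
            toDo.add (node.left ());
        if (node.right () != null)
            toDo.add (node.right ());
    }
    return visitor;
}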
By adding state to a visitor, we can use it to accumulate results:
/** A TreeVisitor that concatenates the labels of all nodes it
* visits. */
public class ConcatNode implements BinaryTreeVisitor<String> {
private StringBuffer result = new StringBuffer ();
public void visit (BinaryTree<String> node) {
if (result.length () > 0)
result.append (", ");
result.append (node.label ());
}
public String toString () { return result.toString (); }
}
With this definition, we can print a comma-separated list of the items in myTree in
inorder:

System.out.println (inorderWalk (myTree, new ConcatNode ()));

(This example illustrates why I had inorderWalk return its visitor argument. I
suggest that you go through all the details of why this example works.)
5.4.2 Visiting empty trees
I defined the inorderWalk method of §5.4.1 to be a static (class) method rather
than an instance method in part to make the handling of null trees clean. If we use
the alternative empty-tree representation of §5.3.5, on the other hand, we can avoid
special-casing the null tree and make traversal methods be part of the Tree class.
For example, here is a possible preorder-walk method:
class Tree<T> {
...
public TreeVisitor<T> preorderWalk (TreeVisitor<T> visitor) {
visitor.visit (this);
for (int i = 0; i < numChildren (); i += 1)
child(i).preorderWalk (visitor);
return visitor;
}
...
private static class EmptyTree<T> extends Tree<T> {
...
public TreeVisitor<T> preorderWalk (TreeVisitor<T> visitor) {
return visitor;
}
}
}
Here you see that there are no explicit tests for the empty tree at all; everything is
implicit in which of the two versions of preorderWalk gets called.
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Stack;
public class PreorderIterator<T> implements Iterator<T> {
private Stack<BinaryTree<T>> toDo = new Stack<BinaryTree<T>> ();
/** An Iterator that returns the labels of TREE in
* preorder. */
public PreorderIterator (BinaryTree<T> tree) {
if (tree != null)
toDo.push (tree);
}
public boolean hasNext () {
return ! toDo.empty ();
}
public T next () {
if (toDo.empty ())
throw new NoSuchElementException ();
BinaryTree<T> node = toDo.pop ();
if (node.right () != null)
toDo.push (node.right ());
if (node.left () != null)
toDo.push (node.left ());
return node.label ();
}
public void remove () {
throw new UnsupportedOperationException ();
}
}
Figure 5.9: An iterator for preorder binary-tree traversal using a stack to keep track
of the recursive structure.
5.4.3 Iterators on trees
Recursion fits tree data structures perfectly, since they are themselves recursively
defined data structures. The task of providing a non-recursive traversal of a tree
using an iterator, on the other hand, is rather more troublesome than was the case
for sequences.
One possible approach is to use a stack and simply transform the recursive
structure of a traversal in the same manner we showed for the findExit procedure
in §4.4.1. We might get an iterator like that for BinaryTrees shown in Figure 5.9.
Another alternative is to use a tree data structure with parent links, as shown
for binary trees in Figure 5.10. As you can see, this implementation keeps track of
the next node to be visited (in postorder) in the field next. It finds the node to visit
after next by looking at the parent, and deciding what to do based on whether next
is the left or right child of its parent. Since this iterator does postorder traversal,
the node after next is next's parent if next is a right child, and otherwise it is the
deepest, leftmost descendent of the right child of the parent.
Exercises
5.1. Implement an Iterator that enumerates the labels of a tree's nodes in inorder,
using a stack as in Figure 5.9.
5.2. Implement an Iterator that enumerates the labels of a tree's nodes in inorder,
using parent links as in Figure 5.10.
5.3. Implement a preorder Iterator that operates on the general type Tree (rather
than BinaryTree).
import java.util.Iterator;
import java.util.NoSuchElementException;
public class PostorderIterator<T> implements Iterator<T> {
private BinaryTree<T> next;
/** The first node visited in a postorder traversal of the subtree
* rooted at TREE: its deepest, leftmost descendent. */
private static <T> BinaryTree<T> deepestLeft (BinaryTree<T> tree) {
while (tree != null) {
if (tree.left () != null)
tree = tree.left ();
else if (tree.right () != null)
tree = tree.right ();
else
return tree;
}
return null;
}
/** An Iterator that returns the labels of TREE in
* postorder. */
public PostorderIterator (BinaryTree<T> tree) {
next = deepestLeft (tree);
}
public boolean hasNext () {
return next != null;
}
public T next () {
if (next == null)
throw new NoSuchElementException ();
T result = next.label ();
BinaryTree<T> p = next.parent ();
if (p == null || p.right () == next || p.right () == null)
// Have just finished with the last child of p (or the whole tree).
next = p;
else
// Have just finished with the left child of p; continue with
// the first postorder node of p's right subtree.
next = deepestLeft (p.right ());
return result;
}
public void remove () {
throw new UnsupportedOperationException ();
}
}
Figure 5.10: An iterator for postorder binary-tree traversal, using parent links in
the tree to keep track of the recursive structure.
Chapter 6
Search Trees
A rather important use of trees is in searching. The task is to find out whether
some target value is present in a data structure that represents a set of data, and
possibly to return some auxiliary information associated with that value. In all
these searches, we perform a number of steps until we either find the value we're
looking for, or exhaust the possibilities. At each step, we eliminate some part of the
remaining set from further consideration. In the case of linear searches (see §1.3.1),
we eliminate one item at each step. In the case of binary searches (see §1.3.4), we
eliminate half the remaining data at each step.
The problem with binary search is that the set of search items is difficult to
change; adding a new item, unless it is larger than all existing data, requires that
we move some portion of the array over to make room for the new item. The worst-
case cost of this operation rises proportionately with the size of the set. Changing
the array to a list solves the insertion problem, but the crucial operation of a binary
search, finding the middle of a section of the array, becomes expensive.
Enter the tree. Let's suppose that we have a set of data values, that we can
extract from each data value a key, and that the set of possible keys is totally
ordered; that is, we can always say that one key is either less than, greater than,
or equal to another. What these mean exactly depends on the kind of data, but
the terms are supposed to be suggestive. We can approximate binary search by
having these data values serve as the labels of a binary search tree (or BST), which
is defined to be a binary tree having the following property:
Binary-Search-Tree Property. For every node, x, of the tree, all
nodes in the left subtree of x have keys that are less than or equal to
the key of x, and all nodes in the right subtree of x have keys that are
greater than or equal to the key of x.
Figure 6.1a is an example of a typical BST. In that example, the labels are integers,
the keys are the same as the labels, and the terms less than, greater than, and
equal to have their usual meanings.
The keys don't have to be integers. In general, we can organize a set of values
into a BST using any total ordering on the keys. A total ordering, let's call it ≼,
has the following properties:
Completeness: For any values x and y, either x ≼ y or y ≼ x, or both;
Transitivity: If x ≼ y and y ≼ z, then x ≼ z; and
Anti-symmetry: If x ≼ y and y ≼ x, then x = y.
For example, the keys can be integers, and greater than, etc., can have their usual
meanings. Or the data and keys can be strings, with the ordering being dictionary
order. Or the data can be pairs, (a, b), and the keys can be the first items of the
pairs. A dictionary is like that: it is ordered by the words being defined, regardless
of their meanings. This last order is an example where one might expect to have
several distinct items in the search tree with equal keys.
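To illustrate this last kind of ordering, here is a minimal sketch of a Comparator
that orders pairs by their first components only (Pair and KeyOrder are invented
names, not from the text); distinct pairs with equal keys compare as equal, which
is exactly the situation just described:

import java.util.Comparator;

/** A pair whose first component serves as its key. */
class Pair {
    final String key, value;
    Pair (String key, String value) { this.key = key; this.value = value; }
}

/** Orders Pairs by key alone, ignoring values. */
class KeyOrder implements Comparator<Pair> {
    public int compare (Pair a, Pair b) {
        return a.key.compareTo (b.key);
    }
}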
[Figure 6.1 drawings: tree (a) has root 42, whose left subtree is rooted at 19 (with
children 16 and 25, where 25 has right child 30) and whose right subtree is rooted
at 60 (with children 50 and 91); tree (b) is a chain 16, 19, 25, ..., each node having
only a right child.]
Figure 6.1: Two binary search trees. Tree (b) is a right-leaning linear tree.
An important property of BSTs, which follows immediately from their definition,
is that traversing a BST in inorder visits its nodes in ascending order of their labels.
This leads to a simple algorithm for sorting known as treesort.
/** Permute the elements of A into non-decreasing order. Assumes
 *  the elements of A have an order on them. */
static void sort (SomeType[] A) {
    int i;
    BST T;
    T = null;
    for (i = 0; i < A.length; i += 1) {
        insert A[i] into search tree T;
    }
    i = 0;
    traverse T in inorder, where visiting a node, Q, means
        A[i] = Q.label (); i += 1;
}
The array contains elements of type SomeType, by which I intend to denote a type
that has less-than and equality operators on it, as required by the definition of a
BST.
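As a concrete rendering of this sketch, here is one way to flesh it out using the
int-labeled BST class of Figure 6.2 below; fillInorder is a helper invented here,
not part of that class:

/** Permute the elements of A into non-decreasing order. */
static void sort (int[] A) {
    BST T = null;
    for (int i = 0; i < A.length; i += 1)
        T = BST.insert (T, A[i]);
    fillInorder (T, A, 0);
}

/** Store the labels of T into A in inorder, starting at A[k];
 *  return the next unused index in A. */
static int fillInorder (BST T, int[] A, int k) {
    if (T == null)
        return k;
    k = fillInorder (T.left (), A, k);
    A[k] = T.label ();
    return fillInorder (T.right (), A, k + 1);
}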
6.1 Operations on a BST
A BST is simply a binary tree, and therefore we can use the representation from
§5.2, giving the class in Figure 6.2. For now, I will use the type int for labels, and
we'll assume that labels are the same as keys.
Since it is possible to have more than one instance of a label in this particular
version of binary search tree, I have to specify carefully what it means to remove that
label or to find a node that contains it. I have chosen here to use the highest
node containing the label (the one nearest the root). [Why will this always be
unique? That is, why can't there be two highest nodes containing a label, equally
near the root?]
One problematic feature of this particular BST definition is that the data structure
is relatively unprotected. As the comment on insert indicates, it is possible to
break a BST by inserting something injudicious into one of its children, as with
BST.insert(T.left(), 42) when T.label() is 20. When we incorporate the representation
into a full-fledged implementation of SortedSet (see §6.2), we'll protect
it against such abuse.
6.1.1 Searching a BST
Searching a BST is very similar to binary search in an array, with the root of the
tree corresponding to the middle of the array.
/** The highest node in T that contains the
 *  label L, or null if there is none. */
public static BST find (BST T, int L) {
    if (T == null || L == T.label)
        return T;
    else if (L < T.label)
        return find (T.left, L);
    else
        return find (T.right, L);
}
6.1.2 Inserting into a BST
As promised, the advantage of using a tree is that it is relatively cheap to add things
to it, as in the following routine.
/** A binary search tree. */
class BST {
    protected int label;
    protected BST left, right;

    /** A leaf node with given LABEL. */
    public BST (int label) { this (label, null, null); }

    /** Fetch the label of this node. */
    public int label () ...

    /** Fetch the left (right) child of this. */
    public BST left () ...
    public BST right () ...

    /** The highest node in T that contains the
     *  label L, or null if there is none. */
    public static BST find (BST T, int L) ...

    /** True iff label L is in T. */
    public static boolean isIn (BST T, int L)
        { return find (T, L) != null; }

    /** Insert the label L into T, returning the modified tree.
     *  The nodes of the original tree may be modified. If T is a
     *  subtree of a larger BST, T', then insertion into T will render
     *  T' invalid due to violation of the binary-search-tree property
     *  if L > T'.label() and T is in T'.left(), or L < T'.label()
     *  and T is in T'.right(). */
    public static BST insert (BST T, int L) ...

    /** Delete the instance of label L from T that is closest to
     *  the root and return the modified tree. The nodes of
     *  the original tree may be modified. */
    public static BST remove (BST T, int L) ...

    /* This constructor is private to force all BST creation
     * to be done by the insert method. */
    private BST (int label, BST left, BST right) {
        this.label = label; this.left = left; this.right = right;
    }
}
Figure 6.2: A BST representation.
/** Insert the label L into T, returning the modified tree.
 *  The nodes of the original tree may be modified.... */
static BST insert (BST T, int L) {
    if (T == null)
        return new BST (L, null, null);
    if (L < T.label)
        T.left = insert (T.left, L);
    else
        T.right = insert (T.right, L);
    return T;
}
Because of the particular way that I have written this, when I insert multiple copies
of a value into the tree, they always go to the right of all existing copies. I will
preserve this property in the delete operation.
6.1.3 Deleting items from a BST
Deletion is quite a bit more complex, since when one removes an internal node, one
can't just let its children fall off, but must re-attach them somewhere in the tree.
Obviously, deletion of an external node is easy; just replace it with the null tree
(see Figure 6.3(a)). It's also easy to remove an internal node that is missing one
child: just have the other child commit patricide and move up (Figure 6.3(b)).
When neither child is empty, we can find the successor of the node we want to
remove: the first node in the right tree, when it is traversed in inorder. Now that
node will contain the smallest key in the right subtree. Furthermore, because it is
the first node in inorder, its left child will be null [why?]. Therefore, we can replace
that node with its right child and move its key to the node we are removing, as
shown in Figure 6.3(c).
[Figure 6.3 drawings: the tree of Figure 6.1 after removing 30 (the leaf simply
disappears); after removing 25 (its only child, 30, moves up to take its place); and
after removing 42 (the root's label is replaced by its inorder successor, 50).]
Figure 6.3: Three possible deletions, each starting from the tree in Figure 6.1.
A possible set of subprograms for deletion from a BST appears in Figure 6.4.
The auxiliary routine swapSmallest is an additional method private to BST,
defined as follows.
/** Delete the instance of label L from T that is closest to
 *  the root and return the modified tree. The nodes of
 *  the original tree may be modified. */
public static BST remove (BST T, int L) {
    if (T == null)
        return null;
    if (L < T.label)
        T.left = remove (T.left, L);
    else if (L > T.label)
        T.right = remove (T.right, L);
    // Otherwise, we've found L.
    else if (T.left == null)
        return T.right;
    else if (T.right == null)
        return T.left;
    else
        T.right = swapSmallest (T.right, T);
    return T;
}

/** Move the label from the first node in T (in an inorder
 *  traversal) to node R (over-writing the current label of R),
 *  remove the first node of T from T, and return the resulting tree.
 */
private static BST swapSmallest (BST T, BST R) {
    if (T.left == null) {
        R.label = T.label;
        return T.right;
    } else {
        T.left = swapSmallest (T.left, R);
        return T;
    }
}
Figure 6.4: Removing items from a BST without parent pointers.
static BST insert (BST T, int L) {
    BST newNode;
    if (T == null)
        return new BST (L, null, null);
    if (L < T.label)
        T.left = newNode = insert (T.left, L);
    else
        T.right = newNode = insert (T.right, L);
    newNode.parent = T;
    return T;
}
Figure 6.5: Insertion into a BST that has parent pointers.
6.1.4 Operations with parent pointers
If we revise the BST class to provide a parent operation, and add a corresponding
parent field to the representation, the operations become more complex, but provide
a bit more flexibility. It is probably wise not to provide a setParent operation for
BST, since it is particularly easy to destroy the binary-search-tree property with this
operation, and a client of BST would be unlikely to need it in any case, given the
existence of insert and remove operations.
The find operation is unaffected, since it ignores parent nodes. When inserting
in a BST, on the other hand, life is complicated by the fact that insert
must set the parent of any node inserted. Figure 6.5 shows one way. Finally, removal
from a BST with parent pointers (shown in Figure 6.6) is trickiest of all, as usual.
6.1.5 Degeneracy strikes
Unfortunately, all is not roses. The tree in Figure 6.1(b) is the result of inserting
nodes into a tree in ascending order (obviously, the same tree can result from
appropriate deletions from a larger tree as well). You should be able to see that doing a
search or insertion on this tree is just like doing a search or insertion on a linked list;
it is a linked list, but with extra pointers in each element that are always null. This
tree is not balanced: it contains subtrees in which left and right children have very
different heights. We will return to this question in Chapter 9, after developing a
bit more machinery.
6.2 Implementing the SortedSet interface
The standard Java library interface SortedSet (see §2.2.4) provides a kind of
Collection that supports range queries. That is, a program can use the interface
to find all items in a collection that are within a certain range of values, according
to some ordering relation. Searching for a single specific value is simply a special
case in which the range contains just one value. It is fairly easy to implement this
/** Delete the instance of label L from T that is closest to
 *  the root and return the modified tree. The nodes of
 *  the original tree may be modified. */
public static BST remove (BST T, int L) {
    if (T == null)
        return null;
    BST newChild;
    newChild = null;
    if (L < T.label)
        T.left = newChild = remove (T.left, L);
    else if (L > T.label)
        T.right = newChild = remove (T.right, L);
    // Otherwise, we've found L.
    else if (T.left == null)
        return T.right;
    else if (T.right == null)
        return T.left;
    else
        T.right = newChild = swapSmallest (T.right, T);
    if (newChild != null)
        newChild.parent = T;
    return T;
}

private static BST swapSmallest (BST T, BST R) {
    if (T.left == null) {
        R.label = T.label;
        return T.right;
    } else {
        T.left = swapSmallest (T.left, R);
        if (T.left != null)
            T.left.parent = T;
        return T;
    }
}
Figure 6.6: Removing items from a BST with parent pointers.
interface using a binary search tree as the representation; we'll call the result a
BSTSet.
Let's plan ahead a little. Among the operations we'll have to support are
headSet, tailSet, and subSet, which return views of some underlying set that consist
of a subrange of that set. The values returned will be full-fledged SortedSets
in their own right, modifications to which are supposed to modify the underlying
set as well (and vice-versa). Since a full-fledged set can also be thought of as a view
of a range in which the bounds are infinitely small to infinitely large, we might
look for a representation that supports both sets created fresh from a constructor,
and those that are views of other sets. This suggests a representation for our set
that contains a pointer to the root of a BST, and two bounds indicating the largest
and smallest members of the set, with null indicating a missing bound.
We make the root of the BST a (permanent) sentinel node for an important
reason. We will use the same tree for all views of the set. If our representation
simply pointed at a root of the tree that contained data, then this pointer would
have to change whenever that node of the tree was removed. But then, we would
have to make sure to update the root pointer in all other views of the set as well,
since they are also supposed to reflect changes in the set. By introducing the sentinel
node, shared by all views and never deleted, we make the problem of keeping them
all up to date trivial. This is a typical example of the old computer-science maxim:
Most technical problems can be solved by introducing another level of indirection.
Assuming we use parent pointers, an iterator through a set can consist of a
pointer to the next node whose label is to be returned, a pointer to the last node
whose label was returned (for implementing remove) and a pointer to the BSTSet
being iterated over (conveniently provided in Java by making the iterator an inner
class). The iterator will proceed in inorder, skipping over portions of the tree that
are outside the bounds on the set. See also Exercise 5.2 concerning iterating using
a parent pointer.
Figure 6.8 illustrates a BSTSet, showing the major elements of the representation:
the original set, the BST that contains its data, a view of the same set, and
an iterator over this view. The sets all contain space for a Comparator (see §2.2.4)
to allow the user of the set to specify an ordering; in Figure 6.8, we use the natural
ordering, which on strings gives us lexicographical order. Figure 6.7 contains a
sketch of the corresponding Java declarations for the representation.
6.3 Orthogonal Range Queries
Binary search trees divide data (ideally) into halves, using a linear ordering on the
data. The divide-and-conquer idea, however, does not require that the factor be two.
Suppose we are dealing with keys that have more structure. For example, consider a
collection of items that have locations on, say, some two-dimensional area. In some
cases, we may wish to find items in this collection based on their location; their
keys are their locations. While it is possible to impose a linear ordering on such
keys, it is not terribly useful. For example, we could use a lexicographic ordering,
and define (x0, y0) > (x1, y1) iff x0 > x1, or x0 = x1 and y0 > y1. However, with
public class BSTSet<T> extends AbstractSet<T> {

    /** The empty set, using COMP as the ordering. */
    public BSTSet (Comparator<T> comp) {
        comparator = comp;
        low = high = null;
        sent = new BST<T> ();
    }

    /** The empty set, using natural ordering. */
    public BSTSet () { this (null); }

    /** The set initialized to the contents of C, with natural order. */
    public BSTSet (Collection<? extends T> c) { this (); addAll (c); }

    /** The set initialized to the contents of S, same ordering. */
    public BSTSet (SortedSet<T> s) {
        this (s.comparator ()); addAll (s);
    }

    ...

    /** Value of comparator(); null if naturally ordered. */
    private Comparator<T> comparator;
    /** Bounds on elements in this set; null if no bounds. */
    private T low, high;
    /** Sentinel of BST containing data. */
    private final BST<T> sent;
Figure 6.7: Java representation for BSTSet class, showing only constructors and
instance variables.
    /** Used internally to form views. */
    private BSTSet (BSTSet<T> set, T low, T high) {
        comparator = set.comparator ();
        this.low = low; this.high = high;
        this.sent = set.sent;
    }

    /** An iterator over BSTSet. */
    private class BSTIter implements Iterator<T> {
        /** Next node in iteration to yield. Equals the sentinel node
         *  when done. */
        BST<T> next;
        /** Node last returned by next(), or null if none, or if remove()
         *  has intervened. */
        BST<T> last;

        BSTIter () {
            last = null;
            next = first node that is in bounds, or sent if none;
        }
        ...
    }

    /** A node in the BST. */
    private static class BST<T> {
        T label;
        BST<T> left, right, parent;

        /** A sentinel node. */
        BST () { label = null; parent = null; }

        BST (T label, BST<T> left, BST<T> right) {
            this.label = label; this.left = left; this.right = right;
        }
    }
}
Figure 6.7, continued: Private nested classes used in implementation
Figure 6.8: A BSTSet, fauna, a view, subset, formed from fauna.subSet("dog",
"gnu"), and an iterator, I, over subset. The BST part of the representation is
shared between fauna and subset. Triangles represent whole subtrees, and rounded
rectangles represent individual nodes. Each set contains a pointer to the root of the
BST (a sentinel node, whose label is considered larger than any value in the tree), plus
lower and upper bounds on the values (null means unbounded), and a Comparator
(in this case, null, indicating natural order). The iterator contains a pointer to
subset, which it is iterating over, a pointer (next) to the node containing the next
label in sequence (duck) and another pointer (last) to the node containing the
label in the sequence that was last delivered by I.next(). The dashed regions of
the BST are skipped entirely by the iterator. The hartebeest node is not returned
by the iterator, but the iterator does have to pass through it to get down to the
nodes it does return.
that definition, the set of all objects between points A and B consists of all those
objects whose horizontal position lies between those of A and B, but whose vertical
position is arbitrary (a long vertical strip). Half the information is unused.
The term quadtree (or quad tree) refers to a class of search tree structure that
better exploits two-dimensional location data. Each step of a search divides the
remaining data into four groups, one for each of four quadrants of a rectangle about
some interior point. This interior dividing point can be the center (so that the
quadrants are equal) giving a PR quadtree (also called a point-region quadtree or
just region quadtree), or it can be one of the points that is stored in the tree, giving
a point quadtree.
Figure 6.9 illustrates the idea behind the two types of quadtree. Each node of
the tree corresponds to a rectangular region (possibly infinite in the case of point
quadtrees). Any region may be subdivided into four rectangular subregions to
the northwest, northeast, southeast, and southwest of some interior dividing point.
These subregions are represented by children of the tree node that corresponds
to the dividing point. For PR quadtrees, these dividing points are the centers of
rectangles, while for point quadtrees, they are selected from the data points, just
as the dividing values in a binary search tree are selected from the data stored in
the tree.
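To make the point-quadtree idea concrete, here is a minimal sketch of a node type
with an insertion operation (the class and names are invented for illustration, not
taken from the text; a real version would also carry the stored items):

class PointQuadTree {
    // The dividing point; in a point quadtree it is one of the data points.
    private final double x, y;
    // One child per quadrant of the region around (x, y).
    private PointQuadTree nw, ne, sw, se;

    PointQuadTree (double x, double y) { this.x = x; this.y = y; }

    /** Insert the point (px, py) into the tree rooted at this node. */
    void insert (double px, double py) {
        if (px < x) {
            if (py < y) sw = insertIn (sw, px, py);
            else        nw = insertIn (nw, px, py);
        } else {
            if (py < y) se = insertIn (se, px, py);
            else        ne = insertIn (ne, px, py);
        }
    }

    /** Insert (px, py) into T, returning the resulting (non-null) tree. */
    private static PointQuadTree insertIn (PointQuadTree t,
                                           double px, double py) {
        if (t == null)
            return new PointQuadTree (px, py);
        t.insert (px, py);
        return t;
    }
}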
6.4 Priority queues and heaps
Suppose that we are faced with a different problem. Instead of being able to search
quickly for the presence of any element in a set, let us restrict ourselves to searching
for the largest (by flipping everything in the following discussion around in the
obvious way, we can search for smallest elements instead). Finding the largest in a
BST is reasonably easy [how?], but we still have to deal with the imbalance problem
described above. By restricting ourselves to the operations of inserting an element,
and finding and deleting the largest element, we can avoid the balancing problem
easily. A data structure supporting just those operations is called a priority queue,
because we remove items from it in the order of their values, regardless of arrival
order.
In Java, we could simply make a class that implements SortedSet and that was
particularly fast at the operations first and remove(x), when x happens to be the
first element of the set. But of course, the user of such a class might be surprised to
find how slow it is to iterate through an entire set. Therefore, we might specialize
a bit, as shown in Figure 6.10.
A convenient data structure for representing priority queues is the heap (not to
be confused with the large area of storage from which new allocates memory, an
unfortunate but traditional clash of nomenclature). A heap is simply a positional
tree (usually binary) satisfying the following property.
Heap Property. The label at any node in the tree is greater than or
equal to the label of any descendant of that node.
[Figure 6.9 drawings: for each kind of quadtree, the left diagram shows the square
region subdivided around the seven data points A through G, and the right diagram
shows the corresponding tree; the PR quadtree's successive levels correspond to
square edge sizes 200, 100, 50, and 25.]
Figure 6.9: Illustration of two kinds of quadtree for the same set of data. On top
is a four-level PR quadtree, using square regions for simplicity. Below is a
corresponding point quadtree (there are many, depending on which points are used
to divide the data). In each, the left diagram shows the geometry; the dots represent
the positions (the keys) of seven data items at (40, 30), (30, 10), (20, 90),
(30, 60), (10, 70), (70, 70), and (80, 20). On the right, we see the corresponding
tree data structures. For the PR quadtree, each level of the tree contains nodes
that represent squares with the same size of edge (shown at the right). For the
point quadtree, each point is the root of a subtree that divides a rectangular region
into four, generally unequal, regions. The four children of each node represent the
upper-left, upper-right, lower-left, and lower-right quadrants of their common parent
node, respectively. To simplify the drawing, we have not shown the children of
a node when they are all empty.
interface PriorityQueue<T extends Comparable<T>> {
    /** Insert item L into this queue. */
    public void insert (T L);

    /** True iff this queue is empty. */
    public boolean isEmpty ();

    /** The largest element in this queue. Assumes !isEmpty(). */
    public T first ();

    /** Remove and return an instance of the largest element (there may
     *  be more than one; removes only one). Assumes !isEmpty(). */
    public T removeFirst ();
}
Figure 6.10: A possible interface to priority queues.
Since the order of the children is immaterial, there is more freedom in how to arrange
the values in the heap, making it easy to keep a heap bushy. Accordingly, when we
use the unqualified term heap in this context, we will mean a complete tree with
the heap property. This speeds up all the manipulations of heaps, since the time
required to do insertions and deletions is proportional to the height of the heap.
Figure 6.11 illustrates a typical heap.
Implementing the operation of finding the largest value is obviously easy. To
delete the largest element, while keeping both the heap property and the bushiness
of the tree, we first move the last item on the bottom level of the heap to the
root of the tree, replacing and deleting the largest element, and then "reheapify" to
re-establish the heap property. Figure 6.11(b) to (d) illustrates the process. It is typical
to do this with a binary tree represented as an array, as in the class BinaryTree2
of §5.3. Figure 6.12 gives a possible implementation.
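For reference, here is a sketch of the index arithmetic that such an array
representation typically uses, with the root at index 0 (matching the use of label(0)
as the root in Figure 6.12; these helpers are illustrative, not the actual BinaryTree2
methods):

static int left (int k)   { return 2 * k + 1; }
static int right (int k)  { return 2 * k + 2; }
static int parent (int k) { return (k - 1) / 2; }

A complete tree stored this way fills indices 0 to size-1 with no gaps, which is
what makes the bottommost, rightmost label easy to find.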
By repeatedly finding the largest element, of course, we can sort an arbitrary
set of objects:

/** Sort the elements of A in ascending order. */
static void heapSort (Integer[] A) {
    if (A.length <= 1)
        return;
    Heap<Integer> H = new Heap<Integer> (A.length);
    H.setHeap (A, 0, A.length);
    for (int i = A.length - 1; i >= 0; i -= 1)
        A[i] = H.removeFirst ();
}
The process is illustrated in Figure 6.13.
[Figure 6.11 drawings: (a) a complete heap with root 91, whose children are 60
(with children 30 and 42) and 5 (with children 4 and 2); (b)-(d) the deletion of 91:
the last label, 2, overwrites the root and is then sifted down, exchanging places
first with 60 and then with 42.]
Figure 6.11: Illustrative heap (a). The sequence (b)-(d) shows steps in the deletion
of the largest item. The last (bottommost, rightmost) label is first moved up to
overwrite that of the root. It is then "sifted down" until the heap property is
restored. The shaded nodes show where the heap property is violated during the
process.
class Heap<T extends Comparable<T>>
        extends BinaryTree2<T> implements PriorityQueue<T> {

    /** A heap containing up to N > 0 elements. */
    public Heap (int N) { super (N); }

    /** The minimum label value (written −∞). */
    static final int MIN = Integer.MIN_VALUE;

    /** Insert item L into this queue. */
    public void insert (T L) {
        extend (L);
        reHeapifyUp (currentSize () - 1);
    }

    /** True iff this queue is empty. */
    public boolean isEmpty () { return currentSize () == 0; }

    /** The largest element in this queue. Assumes !isEmpty(). */
    public T first () { return label (0); }

    /** Remove and return an instance of the largest element (there may
     *  be more than one; removes only one). Assumes !isEmpty(). */
    public T removeFirst () {
        T result = label (0);
        setLabel (0, label (currentSize () - 1));
        size -= 1;
        reHeapifyDown (0);
        return result;
    }
Figure 6.12: Implementation of a common kind of priority queue: the heap.
    /** Restore the heap property in this tree, assuming that only
     *  NODE may have a label larger than that of its parent. */
    protected void reHeapifyUp (int node) {
        if (node <= 0)
            return;
        T x = label (node);
        while (node != 0 && label (parent (node)).compareTo (x) < 0) {
            setLabel (node, label (parent (node)));
            node = parent (node);
        }
        setLabel (node, x);
    }

    /** Restore the heap property in this tree, assuming that only
     *  NODE may have a label smaller than those of its children. */
    protected void reHeapifyDown (int node) {
        T x = label (node);
        while (true) {
            if (left (node) >= currentSize ())
                break;
            int largerChild =
                (right (node) >= currentSize ()
                 || label (right (node)).compareTo (label (left (node))) <= 0)
                ? left (node) : right (node);
            if (x.compareTo (label (largerChild)) >= 0)
                break;
            setLabel (node, label (largerChild));
            node = largerChild;
        }
        setLabel (node, x);
    }

    /** Set the labels in this Heap to A[off], A[off+1], ...
     *  A[off+len-1]. Assumes that LEN <= maxSize(). */
    public void setHeap (T[] A, int off, int len) {
        for (int i = 0; i < len; i += 1)
            setLabel (i, A[off + i]);
        size = len;
        heapify ();
    }

    /** Turn label(0)..label(size-1) into a proper heap. */
    protected void heapify () { ... }
}
Figure 6.12, continued.
(a)  19  0 -1  7 23  2 42
(b)  42 23 19  7  0  2 -1
(c)  23  7 19 -1  0  2 | 42
(d)  19  7  2 -1  0 | 23 42
(e)   7  0  2 -1 | 19 23 42
(f)   2  0 -1 | 7 19 23 42
(g)   0 -1 | 2 7 19 23 42
(h)  -1 | 0 2 7 19 23 42

Figure 6.13: An example of heapsort. The original array is in (a); (b) is the result of
setHeap; (c)-(h) are the results of successive iterations. Each shows the active part
of the heap array and the portion of the output array that has been set, separated
by a gap (shown here as |).
We could simply implement heapify like this:

protected void heapify () {
    for (int i = 1; i < size; i += 1)
        reHeapifyUp (i);
}

Interestingly enough, however, this implementation is not quite as fast as it could
be, and it is faster to perform the operation by a different method, in which we
work from the leaves back up. That is, in reverse level order, we swap each node
with its parent, if it is larger, and then, as for reHeapifyDown, continue moving the
parent's value down the tree until heapness is restored. It might seem that this is
no different from repeated insertion, but we will see later that it is.

protected void heapify () {
    for (int i = size / 2 - 1; i >= 0; i -= 1)
        reHeapifyDown (i);
}
6.5 Heapify Time
If we measure the time requirements for sorting N items with heapSort, we see
that it is the cost of heapifying N elements plus the time required to extract N
items. The worst-case cost of extracting N items from the heap, C_e(N), is dominated
by the cost of reHeapifyDown, starting at the top node of the heap. If we count
comparisons of parent labels against child labels, you can see that the worst-case
cost here is proportional to the current height of the heap. Suppose the initial
height of the heap is k (and that N = 2^(k+1) - 1). It stays that way until 2^k items
have been extracted (removing the whole bottom row), and then becomes k - 1.
It stays at k - 1 for the next 2^(k-1) items, and so forth. Thus, the total time spent
extracting items is

    C_e(N) = C_e(2^(k+1) - 1) = 2^k · k + 2^(k-1) · (k-1) + ... + 2^0 · 0.

If we write 2^k · k as 2^k + ... + 2^k (k times) and re-arrange the terms, we get

    C_e(2^(k+1) - 1)
        = 2^k · k + 2^(k-1) · (k-1) + ... + 2^0 · 0
        = 2^1 + 2^2 + ... + 2^(k-1) + 2^k
        +       2^2 + ... + 2^(k-1) + 2^k
          ...
        +                   2^(k-1) + 2^k
        +                             2^k
        = (2^(k+1) - 2) + (2^(k+1) - 4) + ... + (2^(k+1) - 2^(k-1)) + (2^(k+1) - 2^k)
        = k · 2^(k+1) - (2^(k+1) - 2)
        ∈ Θ(k · 2^(k+1)) = Θ(N lg N)
Now let's consider the cost of heapifying N elements. If we do it by inserting the
N elements one by one and performing reHeapifyUp, then we get a cost like that
of extracting N elements: For the first insertion, we do 0 label comparisons; for
the next 2, we do 1; for the next 4, we do 2; etc., or

    C_h^u(2^(k+1) - 1) = 2^0 · 0 + 2^1 · 1 + ... + 2^k · k,

where C_h^u(N) is the worst-case cost of heapifying N elements by repeated
reHeapifyUps. This is the same sum as the one we just did, giving

    C_h^u(N) ∈ Θ(N lg N).

But suppose we heapify by performing the second algorithm at the end of §6.4,
performing a reHeapifyDown on all the items of the array starting at item ⌊N/2⌋ - 1
and going toward item 0. The cost of reHeapifyDown depends on the distance to
the deepest level. For the last 2^k items in the heap, this cost is 0 (which is why we
skip them). For the preceding 2^(k-1), the cost is 1, etc. This gives

    C_h^d(N) = C_h^d(2^(k+1) - 1) = 2^(k-1) · 1 + 2^(k-2) · 2 + ... + 2^0 · k.

Using the same trick as before,

    C_h^d(2^(k+1) - 1)
        = 2^(k-1) · 1 + 2^(k-2) · 2 + ... + 2^0 · k
        = 2^0 + 2^1 + ... + 2^(k-2) + 2^(k-1)
        + 2^0 + 2^1 + ... + 2^(k-2)
          ...
        + 2^0
        = (2^k - 1) + (2^(k-1) - 1) + ... + (2^1 - 1)
        = 2^(k+1) - 2 - k
        ∈ Θ(N)
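As a quick check of these two formulas (a small worked example, not from the
text): for k = 2, so N = 7, the bottom-up method costs at most
C_h^d(7) = 2·1 + 1·2 = 4 = 2^3 - 2 - 2 comparisons, while heapifying by repeated
insertion costs up to C_h^u(7) = 1·0 + 2·1 + 4·2 = 10.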
So this second heapification method runs considerably faster (asymptotically)
than the obvious repeated-insertion method. Of course, since the cost of extracting
N elements is still Θ(N lg N) in the worst case, the overall worst-case cost of
heapsort is still Θ(N lg N). However, this does lead you to expect that for big
enough N, there will be some constant-factor advantage to using the second form
of heapification, and that's an important practical consideration.
Exercises
6.1. Fill in a concrete implementation for the type QuadTree that has the following
constructor:
/** An initially empty quadtree that is restricted to contain points
* within the W x H rectangle whose center is at (X0,Y0). */
public QuadTree (double x0, double y0, double w, double h) ...
and no other constructors.
6.2. Fill in a concrete implementation for the type QuadTree that has the following
constructor:

/** An initially empty quadtree. */
public QuadTree () ...

and no other constructors. This problem is more difficult than the preceding
exercise, because there is no a priori limit on the boundaries of the entire region. While
you could simply use the maximum and minimum floating-point numbers for these
bounds, the result would in general be a wasteful tree structure with many useless
levels. Therefore, it makes sense to grow the region covered, as necessary, starting
from some arbitrary initial size.
6.3. Suppose that we introduce a new kind of removal operation for BSTs that
have parent pointers (see §6.1.4):

/** Delete the label T.label() from the BST containing T, assuming
 *  that the parent of T is not null. The nodes of the original tree
 *  will be modified. */
public static BST remove (BST T) { ... }

Give an implementation of this operation.
6.4. The implementation of BSTSet in §6.2 left out one detail: the implementation
of the size method. Indeed, the representation given in Figure 6.7 provides no way
to compute it other than to count the nodes in the underlying BST each time. Show
how to augment the representation of BSTSet or its nested classes as necessary so
as to allow a constant-time implementation of size. Remember that the size of any
view of a BSTSet might change when you add or remove elements from any
other view of the same BSTSet.
6.5. Assume that we have a heap that is stored with the largest element at the
root. To print all elements of this heap that are greater than or equal to some key
X, we could perform the removeFirst operation repeatedly until we get something
less than X, but this would presumably take worst-case time Θ(k lg N), where N
is the number of items in the heap and k is the number of items greater than or
equal to X. Furthermore, of course, it changes the heap. Show how to perform this
operation in Θ(k) time without modifying the heap.
Chapter 7
Hashing
Sorted arrays and binary search trees all allow fast queries of the form "is there
something larger (smaller) than X in here?" Heaps allow the query "what is the
largest item in here?" Sometimes, however, we are interested in knowing only
whether some item is present; in other words, only in equality.
Consider again the isIn procedure from §1.3.1, a linear search in a sorted array.
This algorithm requires an amount of time at least proportional to N, the number of
items stored in the array being searched. If we could reduce N, we would speed up
the algorithm. One way to reduce N is to divide the set of keys being searched into
some number, say M, of disjoint subsets and to then find some fast way of choosing
the right subset. By dividing the keys more-or-less evenly among subsets, we can
reduce the time required to find something to be proportional, on the average, to
N/M. This is what binary search does recursively (isInB from §1.3.4), with M = 2.
If we could go even further and choose a value for M that is comparable to N, then
the time required to find a key becomes almost constant.
The problem is to find a way, preferably fast, of picking subsets ("bins") in which
to put keys to be searched for. This method must be consistent, since whenever we
are asked to search for something, we must go to the subset we originally selected
for it. That is, there must be a function, known as a hashing function, that maps
keys to be searched for into the range of values 0 to M - 1.
7.1 Chaining
Once we have this hashing function, we must also have a representation of the set
of bins. Perhaps the simplest scheme is to use linked lists to represent the bins, a
practice known as chaining in the hash-table literature. The standard Java library
class HashSet uses just such a strategy, illustrated in Figure 7.1. More usually, hash
tables appear as mappings, such as implementations of the standard Java interface
java.util.Map. The representation is the same, except that the entries in the bins
carry not only keys, but also the additional information that is supposed to be
indexed by those keys. Figure 7.2 shows part of a possible implementation of the
standard Java class java.util.HashMap, which is itself an implementation of the
Map interface.
The HashMap class shown in Figure 7.2 uses the hashCode method defined for all
Java Objects to select a bin number for any key. If this hash function is a good one,
the bins will receive roughly equal numbers of items (see §7.3 for more discussion).
We can decide on an a priori limit on the average number of items per bin,
and then grow the table whenever that limit is exceeded. This is the purpose of
the loadFactor field and constructor argument. It's natural to ask whether we
might use a faster data structure (such as a binary search tree) for the bins.
However, if we really do choose reasonable values for the size of the table, so that
each bin contains only a few items, that clearly won't gain us much. Growing the
bins array when it exceeds our chosen limit is like growing an ArrayList (§4.1).
For good asymptotic time performance, we roughly double its size each time it
becomes necessary to grow the table. We have to remember, in addition, that the
bin numbers of most items will change, so that we'll have to move them.
7.2 Open-address hashing
In the Good Old Days, the overhead of all those link fields and the expense of
all those new operations led people to consider ways of avoiding linked lists for
representing the contents of bins. The open-addressing schemes put the entries
directly into the bins (one per bin). If a bin is already full, then subsequent entries
that have the same hash value overflow into other, unused entries according to
some systematic scheme. As a result, the put operation from Figure 7.2 would look
something like this:
public Val put (Key key, Val value) {
    int h = hash (key);
    while (bins.get (h) != null && ! bins.get (h).key.equals (key))
        h = nextProbe (h);
    if (bins.get (h) == null) {
        bins.set (h, new entry);
        size += 1;
        if ((float) size / bins.size () > loadFactor)
            resize bins;
        return null;
    } else
        return bins.get (h).setValue (value);
}
and get would be similarly modified.
The function nextProbe provides another value in the index range of bins for
use when it turns out that the table is already occupied at position h (a situation
known as a collision).
In the simplest case, nextProbe(h) simply returns (h+1) % bins.size (), an
instance of what is known as linear probing. More generally, linear probing
bin  0:  0, 22
bin  1:  23
bin  2:  (empty)
bin  3:  (empty)
bin  4:  26, 81
bin  5:  5, 82, 38
bin  6:  83, 39
bin  7:  84, 40
bin  8:  63, -3
bin  9:  9, 86
bin 10:  65

size: 17      loadFactor: 2.0      (the table as a whole is named nums)
Figure 7.1: Illustration of a simple hash table with chaining, pointed to by the
variable nums. The table contains 11 bins, each containing a pointer to a linked list
of the items (if any) in that bin. This particular table represents the set
{81, 22, 38, 26, 86, 82, 0, 23, 39, 65, 83, 40, 9, -3, 84, 63, 5}.
The hash function is simply h(x) = x mod 11 on integer keys. (The mathematical
operation a mod b is defined to yield a - b⌊a/b⌋ when b ≠ 0. Therefore, it is always
non-negative if b > 0.) The current load factor in this set is 17/11 ≈ 1.5, against a
maximum of 2.0 (the loadFactor field), although as you can see, the bin sizes vary
from 0 to 3.
package java.util;

public class HashMap<Key,Val> extends AbstractMap<Key,Val> {

    /** A new, empty mapping using a hash table that initially has
     *  INITIALBINS bins, and maintains a load factor <= LOADFACTOR. */
    public HashMap (int initialBins, float loadFactor) {
        if (initialBins < 1 || loadFactor <= 0.0)
            throw new IllegalArgumentException ();
        bins = new ArrayList<Entry<Key,Val>> (initialBins);
        bins.addAll (Collections.nCopies (initialBins, (Entry<Key,Val>) null));
        size = 0; this.loadFactor = loadFactor;
    }

    /** An empty map with INITIALBINS initial bins and load factor 0.75. */
    public HashMap (int initialBins) { this (initialBins, 0.75f); }

    /** An empty map with default initial bins and load factor 0.75. */
    public HashMap () { this (127, 0.75f); }

    /** A mapping that is a copy of M. */
    public HashMap (Map<Key,Val> M) { this (M.size (), 0.75f); putAll (M); }

    public Val get (Object key) {
        Entry<Key,Val> e = find (key, bins.get (hash (key)));
        return (e == null) ? null : e.value;
    }

    /** Cause get(KEY) == VALUE. Returns the previous get(KEY). */
    public Val put (Key key, Val value) {
        int h = hash (key);
        Entry<Key,Val> e = find (key, bins.get (h));
        if (e == null) {
            bins.set (h, new Entry<Key,Val> (key, value, bins.get (h)));
            size += 1;
            if (size > bins.size () * loadFactor)
                grow ();
            return null;
        } else
            return e.setValue (value);
    }

Figure 7.2: Part of an implementation of class java.util.HashMap, a
hash-table-based implementation of the java.util.Map interface.
    private static class Entry<K,V> implements Map.Entry<K,V> {
        K key; V value;
        Entry<K,V> next;

        Entry (K key, V value, Entry<K,V> next)
            { this.key = key; this.value = value; this.next = next; }

        public K getKey () { return key; }
        public V getValue () { return value; }
        public V setValue (V x)
            { V old = value; value = x; return old; }
        public int hashCode () { see Figure 2.14 }
        public boolean equals (Object o) { see Figure 2.14 }
    }

    private ArrayList<Entry<Key,Val>> bins;
    private int size;        /** Number of items currently stored. */
    private float loadFactor;

    /** Increase number of bins. */
    private void grow () {
        HashMap<Key,Val> newMap
            = new HashMap<Key,Val> (primeAbove (bins.size () * 2), loadFactor);
        newMap.putAll (this); copyFrom (newMap);
    }

    /** Return a value in the range 0 .. bins.size ()-1, based on
     *  the hash code of KEY. */
    private int hash (Object key) {
        return (key == null) ? 0
            : (0x7fffffff & key.hashCode ()) % bins.size ();
    }

    /** Set THIS to the contents of S, destroying the previous
     *  contents of THIS, and invalidating S. */
    private void copyFrom (HashMap<Key,Val> S)
        { size = S.size; bins = S.bins; loadFactor = S.loadFactor; }

    /** The Entry in the list BIN whose key is KEY, or null if none. */
    private Entry<Key,Val> find (Object key, Entry<Key,Val> bin) {
        for (Entry<Key,Val> e = bin; e != null; e = e.next)
            if (key == null && e.key == null || key.equals (e.key))
                return e;
        return null;
    }

    private int primeAbove (int N) { return a prime number >= N; }
}
Figure 7.2, continued: Private declarations for HashMap.
adds a positive constant that is relatively prime to the table size bins.size ()
[why relatively prime?]. If we take the 17 keys of Figure 7.1:
81, 22, 38, 26, 86, 82, 0, 23, 39, 65, 83, 40, 9, -3, 84, 63, 5,
and insert them in this order into an array of size 23 using linear probing with
increment 1 and x mod 23 as the hash function, the array of bins will contain the
following keys:
index:  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22
key:    0  23  63  26   -   5   -   -   -   9   -   -  81  82  83  38  39  86  40  65  -3  84  22
As you can see, several keys are displaced from their natural positions. For example,
84 mod 23 = 15 and 63 mod 23 = 17.
There is a clustering phenomenon associated with linear probing. The problem
is simple to see with reference to the chaining method. If the sequence of entries
examined in searching for some key is, say, b_0, b_1, ..., b_n, and if any other key should
hash to one of these b_i, then the sequence of entries examined in searching for it will
be part of the same sequence, b_i, b_(i+1), ..., b_n, even when the two keys have different
hash values. In effect, what would be two distinct lists under chaining are merged
together under linear probing, as much as doubling the effective average size of the
bins for those keys. The longest chain for our set of integers (see Figure 7.1) was
only 3 long. In the open-address example above, the longest chain is 9 items long
(look at 63), even though only two other keys (86 and 40) have the same hash value.
By having nextProbe increment the value by different amounts, depending on
the original key (a technique known as double hashing), we can ameliorate this
effect.
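As a sketch of what nextProbe might look like for these two schemes (the names
and placement, alongside the put method above, are assumed, not given in the
text):

/** Linear probing: advance by a constant that is relatively prime
 *  to the table size. */
private int nextProbe (int h) {
    return (h + 1) % bins.size ();
}

/** Double hashing: advance by an amount derived from the key itself,
 *  always in the range 1 .. bins.size()-1. */
private int nextProbe (int h, Object key) {
    int step = 1 + (0x7fffffff & key.hashCode ()) % (bins.size () - 1);
    return (h + step) % bins.size ();
}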
Deletion from an open-addressed hash table is non-trivial. Simply marking an
entry as unoccupied can break the chain of colliding entries, and delete more than
the desired item from the table [why?]. If deletion is necessary (often, it is not),
we have to be more subtle. The interested reader is referred to volume 3 of Knuth,
The Art of Computer Programming.
The problem with open-address schemes in general is that keys that would be in
separate bins under the chaining scheme can compete with each other. Under the
chaining scheme, if all entries are full and we search for a key that is not in the table,
the search requires only as many probes (i.e., tests for equality) as there are keys in
the table that have the same hashed value. Under any open-addressing scheme, it
would require N probes to find that the key is not in the table. In my experience,
the cost of the extra link nodes required for chaining is relatively unimportant, and
for most purposes, I recommend using chaining rather than open-address schemes.
7.3 The hash function
This leaves the question of what to use for the function hash, used to choose the
bin in which to place a key. In order for the map or set we are implementing to
work properly, it is first important that our hash function satisfy two constraints:
1. For any key value, K, the value of hash(K) must remain constant during the
execution of the program while K is in the table (or the table must be
reconstructed if hash(K) changes).
2. If two keys are equal (according to the equals method, or whatever equality
test the hash table is using), then their hash values must be equal.
If either condition is violated, a key can effectively disappear from the table. On
the other hand, it is not generally necessary for the value of hash to be constant
from one execution of a program to the next, nor is it necessary that unequal keys
have unequal hash values (although performance will clearly suffer if too many keys
have the same hash value).
If the keys are simply non-negative integers, a simple and effective function is
to use the remainder modulo the table size:

hash(X) = X % bins.size ();

For integers that might be negative, we have to make some provision. For example,

hash(X) = (X & 0x7fffffff) % bins.size ();

has the effect of adding 2^31 to any negative value of X first [why?]. Alternatively, if
bins.size () is odd, then

hash(X) = X % ((bins.size () + 1) / 2) + bins.size () / 2;

will also work [why?].
Handling non-numeric key values requires a bit more work. All Java objects
have defined on them a hashCode method that we have used to convert Objects
into integers (whence we can apply the procedure on integers above). The default
implementation of x.equals(y) on Object is x == y; that is, the test that x and y
are references to the same object. Correspondingly, the default implementation of
x.hashCode() supplied by Object simply returns an integer value that is derived
from the address of the object pointed to by x; that is, the pointer value x
treated as an integer (which is all it really is, behind the scenes). This default
implementation is not suitable for cases where we want to consider two different
objects to be the same. For example, the two Strings computed by

String s1 = "Hello, world!", s2 = "Hello," + " " + "world!";

will have the property that s1.equals (s2), but s1 != s2 (that is, they are two
different String objects that happen to contain the same sequence of characters).
Hence, the default hashCode operation is not suitable for String, and therefore the
String class overrides the default definition with its own.
For converting to an index into bins, we used the remainder operation. This
obviously produces a number in range; what is not so obvious is why we chose the
table sizes we did (primes not close to a power of 2). Suffice it to say that other
choices of size tend to produce unfortunate results. For example, using a power of
2 means that the high-order bits of X.hashCode() get ignored.
If keys are not simple integers (strings, for example), a workable strategy is to
first mash them into integers and then apply the remaindering method above. Here
is a representative string-hashing function that does well empirically, taken from a
C compiler by P. J. Weinberger¹. It assumes 8-bit characters and 32-bit ints.
static int hash (String S) {
    int h;
    h = 0;
    for (int p = 0; p < S.length (); p += 1) {
        h = (h << 4) + S.charAt (p);
        h = (h ^ ((h & 0xf0000000) >> 24)) & 0x0fffffff;
    }
    return h;
}
The Java String type has a different function for hashCode, which computes

    Σ_{0 ≤ i < n} c_i · 31^(n-i-1)

using modular int arithmetic to get a result in the range -2^31 to 2^31 - 1. Here, c_i
denotes the ith character in the string.
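Written out as code, that sum is just Horner's rule; the following sketch computes
the same value as the standard String.hashCode:

static int stringHash (String s) {
    int h = 0;
    for (int i = 0; i < s.length (); i += 1)
        h = 31 * h + s.charAt (i);   // int overflow supplies the modular arithmetic
    return h;
}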
7.4 Performance
Assuming the keys are evenly distributed, a hash table will do retrieval in constant
time, regardless of N, the number of items contained. As indicated in the analysis
we did in §4.1 about growing ArrayLists, insertion also has constant amortized
cost (i.e., cost averaged over all insertions). Of course, if the keys are not evenly
distributed, then we can see Θ(N) cost.
If there is a possibility that one hash function will sometimes have bad clustering
problems, a technique known as universal hashing can help. Here, you choose a hash
function at random from some carefully chosen set. On average over all runs of your
program, your hash function will then perform well.
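One classic universal family for integer keys, as a sketch (the class and constants
here are illustrative, not from the text), chooses random parameters a and b once
per table and computes h(x) = ((a·x + b) mod p) mod M for a fixed prime p:

import java.util.Random;

class UniversalHash {
    private static final long P = 2147483647L;  // the prime 2^31 - 1
    private final long a, b;                    // chosen once, at random
    private final int M;                        // number of bins

    UniversalHash (int M, Random r) {
        this.M = M;
        a = 1 + (long) (r.nextDouble () * (P - 1));  // 1 <= a < P
        b = (long) (r.nextDouble () * P);            // 0 <= b < P
    }

    int hash (int x) {
        long k = x & 0x7fffffffL;   // reduce the key to 0 .. P-1
        return (int) (((a * k + b) % P) % M);
    }
}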
Exercises
7.1. Give an implementation for the iterator function over the HashMap
representation given in §7.1, and the Iterator class it needs. Since we have chosen
rather simple linked lists, you will have to use care in getting the remove operation
right.
¹ The version here is adapted from Aho, Sethi, and Ullman, Compilers: Principles, Techniques,
and Tools, Addison-Wesley, 1986, p. 436.
Chapter 8
Sorting and Selecting
At least at one time, most CPU time and I/O bandwidth was spent sorting (these
days, I suspect more may be spent rendering MPEG files). As a result, sorting has
been the subject of extensive study and writing. We will hardly scratch the surface
here.
8.1 Basic concepts
The purpose of any sort is to permute some set of items that we'll call records so
that they are sorted according to some ordering relation. In general, the ordering
relation looks at only part of each record, the key. The records may be sorted
according to more than one key, in which case we refer to the primary key and to
secondary keys. This distinction is actually realized in the ordering function: record
A comes before B iff either A's primary key comes before B's, or their primary keys
are equal and A's secondary key comes before B's. One can extend this definition
in an obvious way to hierarchies of multiple keys. For the purposes of this book, I'll
usually assume that records are of some type Record and that there is an ordering
relation on the records we are sorting. I'll write before(A, B) to mean that the
key of A comes before that of B in whatever order we are using.
Although conceptually we move around the records we are sorting so as to put
them in order, in fact these records may be rather large. Therefore, it is often
preferable to keep around pointers to the records and exchange those instead. If
necessary, the real data can be physically re-arranged as a last step. In Java, this
is very easy of course, since large data items are always referred to by pointers.
Stability. A sort is called stable if it preserves the original order of records that
have equal keys. Any sort can be made stable by (in effect) adding the original
record position as a final secondary key, so that the list of keys (Bob, Mary, Bob,
Robert) becomes something like (Bob.1, Mary.2, Bob.3, Robert.4).
Inversions. For some analyses, we need to have an idea of how out-of-order a
given sequence of keys is. One useful measure is the number of inversions in the
137
138 CHAPTER 8. SORTING AND SELECTING
sequencein a sequence of keys k
0
, . . . , k
N1
, this is the number of pairs of integers,
(i, j), such that i < j and k
i
> k
j
. For example, there are two inversions in the
sequence of words
Charlie, Alpha, Bravo
and three inversions in
Charlie, Bravo, Alpha.
When the keys are already in order, the number of inversions is 0, and when they
are in reverse order, so that every pair of keys is in the wrong order, the number of
inversions is N(N - 1)/2, which is the number of pairs of keys. When all keys are
originally within some distance D of their correct positions in the sorted
permutation, we can establish a pessimistic upper bound of DN inversions in the original
permutation.
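The definition can be pinned down with a direct (quadratic-time) count, a small
sketch not taken from the text:

/** The number of pairs (i, j) with i < j for which A[i] and A[j]
 *  are out of order. */
static int inversions (String[] A) {
    int n = 0;
    for (int i = 0; i < A.length; i += 1)
        for (int j = i + 1; j < A.length; j += 1)
            if (A[i].compareTo (A[j]) > 0)
                n += 1;
    return n;
}

Applied to (Charlie, Alpha, Bravo) it yields 2, and to (Charlie, Bravo, Alpha)
it yields 3, matching the examples above.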
Internal vs. external sorting. A sort that is carried out entirely in primary
memory is known as an internal sort. Those that involve auxiliary disks (or, in
the old days especially, tapes) to hold intermediate results are called external sorts.
The sources of input and output are irrelevant to this classification (one can have
internal sorts on data that comes from an external file; it's just the intermediate
files that matter).
8.2 A Little Notation
Many of the algorithms in these notes deal with (or can be thought of as dealing
with) arrays. In describing or commenting them, we sometimes need to make
assertions about the contents of these arrays. For this purpose, I am going to use a
notation used by David Gries to make descriptive comments about my arrays. The
notation
notation
P
a b
denotes a section of an array whose elements are indexed from a to b and that
satises property P. It also asserts that a b + 1; if a > b, then the segment is
empty. I can also write
c
P
d
to describe an array segment in which items c +1 to d1 satisfy P, and that c < d.
By putting these segments together, I can describe an entire array. For example,
A :
ordered
0 i N
8.3. INSERTION SORTING 139
is true if the array A has N elements, elements 0 through i 1 are ordered, and
0 i N. A notation such as
P
j
denotes a 1-element array segment whose index is j and whose (single) value satises
P. Finally, Ill occasionally need to have simultaneous conditions on nested pieces
of an array. For example,
Q
0 i N
P

refers to an array segment in which items 0 to N 1 satisfy P, items 0 to i 1
satisfy Q, 0 N, and 0 i N.
8.3 Insertion sorting
One very simple sort, and quite adequate for small applications, really, is the
straight insertion sort. The name comes from the fact that at each stage, we insert
an as-yet-unprocessed record into a (sorted) list of the records processed so far, as
illustrated in Figure 8.2. The algorithm is shown in Figure 8.1.
A common way to measure the time required to do a sort is to count the
comparisons of keys (for Figure 8.1, the calls to before). The total (worst-case) time
required by insertionSort is Σ_{0<i<N} C_IL(i), where C_IL(m) is the cost of the
inner (j) loop when i = m, and N is the size of A. Examination of the inner loop
shows that the number of comparisons required is equal to the number of records
numbered 0 to i-1 whose keys are larger than that of x, plus one if there is at least
one smaller key. Since A[0..i-1] is sorted, it contains no inversions, and therefore,
the number of elements after X in the sorted part of A happens to be equal to the
number of inversions in the sequence A[0],..., A[i] (since X is A[i]). When X is
inserted correctly, there will be no inversions in the resulting sequence. It is fairly
easy to work out from that point that the running time of insertionSort,
measured in key comparisons, is bounded by I + N - 1, where I is the total number of
inversions in the original argument to insertionSort. Thus, the more sorted an
array is to begin with, the faster insertionSort runs.
8.4 Shell's sort
The problem with insertion sort can be seen by examining the worst case, where
the array is initially in reverse order. The keys are a great distance from their
final resting places, and must be moved one slot at a time until they get there. If
keys could be moved great distances in little time, it might speed things up a bit.
/** Permute the elements of A to be in ascending order. */
static void insertionSort (Record[] A) {
    int N = A.length;
    for (int i = 1; i < N; i += 1) {
        /* A[0..i-1] is ordered; A[i..N-1] is unprocessed. */
        Record x = A[i];
        int j;
        for (j = i; j > 0 && before (x, A[j-1]); j -= 1) {
            /* A[0..i] is ordered except at position j, and the
             * items in A[j+1..i] are all > x. */
            A[j] = A[j-1];
        }
        /* A[0..i] is ordered except at position j, the items in
         * A[j+1..i] are all > x, and (if j > 0) A[j-1] <= x,
         * so x belongs at position j. */
        A[j] = x;
    }
}
Figure 8.1: Program for performing insertion sort on an array. The before function
is assumed to embody the desired ordering relation.
13 | 9 10 0 22 12 4
9 13 | 10 0 22 12 4
9 10 13 | 0 22 12 4
0 9 10 13 | 22 12 4
0 9 10 13 22 | 12 4
0 9 10 12 13 22 | 4
0 4 9 10 12 13 22
Figure 8.2: Example of insertion sort, showing the array before each call of
insertElement. The gap at each point separates the portion of the array known
to be sorted from the unprocessed portion.
/** Permute the elements of KEYS, which must be distinct,
* into ascending order. */
static void distributionSort1(int[] keys) {
int N = keys.length;
int L = min(keys), U = max(keys);
java.util.BitSet b = new java.util.BitSet();
for (int i = 0; i < N; i += 1)
b.set(keys[i] - L);
for (int i = L, k = 0; i <= U; i += 1)
if (b.get(i-L)) {
keys[k] = i; k += 1;
}
}
Figure 8.3: Sorting distinct keys from a reasonably small and dense set. Here,
assume that the functions min and max return the minimum and maximum values
in an array. Their values are arbitrary if the arrays are empty.
This is the idea behind Shell's sort¹. We choose a diminishing sequence of strides,
s_0 > s_1 > ... > s_{m-1}, typically choosing s_{m-1} = 1. Then, for each j, we divide the
N records into the s_j interleaved sequences

    R_0, R_{s_j}, R_{2s_j}, ...,
    R_1, R_{s_j+1}, R_{2s_j+1}, ...
      ...
    R_{s_j-1}, R_{2s_j-1}, ...

and sort each of these using insertion sort. Figure 8.4 illustrates the process with a
vector in reverse order (requiring a total of 49 comparisons as compared with 120
comparisons for straight insertion sort).

A good sequence of s_j turns out to be s_j = 2^{m-j} − 1, where m = ⌊lg N⌋. With
this sequence, it can be shown that the number of comparisons required is O(N^{1.5}),
which is considerably better than O(N^2). Intuitively, the advantage of such a
sequence, in which the successive s_j are relatively prime, is that on each pass, each
position of the vector participates in a sort with a new set of other positions. The
sorts get "jumbled" and get more of a chance to improve the number of inversions
for later passes.

¹Also known as "shellsort." Knuth's reference: Donald L. Shell, in the Communications of the
ACM 2 (July, 1959), pp. 30–32.
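For concreteness, here is one way the whole procedure might look in Java. This is
a sketch only: it uses int keys and the > comparison in place of the text's Records
and before, and it computes the strides 2^k − 1 on the fly.

/** Permute the elements of A into ascending order using Shell's
 *  sort with strides ..., 15, 7, 3, 1. */
static void shellSort(int[] A) {
    int N = A.length;
    int s = 1;
    while (2 * s + 1 < N)     // largest stride of the form 2^k - 1 below N
        s = 2 * s + 1;
    for (; s >= 1; s /= 2) {  // 2^k - 1 halves (integer division) to 2^(k-1) - 1
        // Insertion sort comparing elements s apart; this sorts all
        // s interleaved subsequences at once.
        for (int i = s; i < N; i += 1) {
            int x = A[i];
            int j;
            for (j = i; j >= s && A[j - s] > x; j -= s)
                A[j] = A[j - s];
            A[j] = x;
        }
    }
}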
                                                  #I   #C
15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0  120    -
 0 14 13 12 11 10  9  8  7  6  5  4  3  2  1 15   91    1
 0  7  6  5  4  3  2  1 14 13 12 11 10  9  8 15   42    9
 0  1  3  2  4  6  5  7  8 10  9 11 13 12 14 15    4   20
 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15    0   19

Figure 8.4: An illustration of Shell's sort, starting with a vector in reverse order.
The increments are 15, 7, 3, and 1. The column marked #I gives the number of
inversions remaining in the array, and the column marked #C gives the number
of key comparisons required to obtain each line from its predecessor.
8.5 Distribution counting
When the range of keys is restricted, there are a number of optimizations possible. In
Column 1 of his book Programming Pearls², Jon Bentley gives a simple algorithm
for the problem of sorting N distinct keys, all of which are in a range of integers so
limited that the programmer can build a vector of bits indexed by those integers.
In the program shown in Figure 8.3, I use a Java BitSet, which is abstractly a set
of non-negative integers (implemented as a packed array of 1-bit quantities).

Let's consider a more general technique we can apply even when there are multi-
ple records with the same key. Assume that the keys of the records to be sorted are
in some reasonably small range of integers. Then the function distributionSort2
shown in Figure 8.6 sorts N records stably, moving them from an input array (A)
to a different output array (B). It computes the correct final position in B for each
record in A. To do so, it uses the fact that the position of any record in B is sup-
posed to be the number of records that either have smaller keys than it has, or
that have the same key, but appear before it in A. Figure 8.5 contains an example
of the program in operation.

²Addison-Wesley, 1986. By the way, that column makes a very nice consciousness-raising
essay on the subject of appropriately-engineered solutions. I highly recommend both this book and
his More Programming Pearls, Addison-Wesley, 1988.
8.6 Selection sort
In insertion sort, we determine an item's final position piecemeal. Another way to
proceed is to place each record in its final position in one move by selecting the
smallest (or largest) key at each step. The simplest implementation of this idea is
straight selection sorting, as follows.

static void selectionSort(Record[] A)
{
    int N = A.length;
    for (int i = 0; i < N-1; i += 1) {
        /* A[0..i-1] is ordered and contains the i smallest items;
           A[i..N-1] is the unprocessed remainder. */
        int m, j;
        for (j = i+1, m = i; j < N; j += 1)
            if (before(A[j], A[m])) m = j;
        /* Now A[m] is the smallest element in A[i..N-1] */
        swap(A, i, m);
    }
}
Here, swap(A,i,m) is assumed to swap elements i and m of A. This sort is not stable;
the swapping of records prevents stability. On the other hand, the program can be
A:       3/A 2/B 2/C 1/D 4/E 2/F 3/G

count_1:  0 1 3 2 1
count_2:  0 1 4 6 7

B_0: 3/A                           count: 0 1 5 6 7
B_1: 2/B 3/A                       count: 0 2 5 6 7
B_2: 2/B 2/C 3/A                   count: 0 3 5 6 7
B_3: 1/D 2/B 2/C 3/A               count: 1 3 5 6 7
B_4: 1/D 2/B 2/C 3/A 4/E           count: 1 3 5 7 7
B_5: 1/D 2/B 2/C 2/F 3/A 4/E       count: 1 4 5 7 7
B_6: 1/D 2/B 2/C 2/F 3/A 3/G 4/E   count: 1 4 6 7 7

Figure 8.5: Illustration of the distributionSort2 program. The values to be
sorted are shown in the array marked A. The keys are the numbers to the left of
the slashes. The data are sorted into the array B, shown at various points in the
algorithm. The labels at the left refer to points in the program in Figure 8.6. Each
point B_k indicates the situation at the end of the last loop where i = k. The
role of the array count changes. First (at count_1), count[k-1] contains the number
of instances of key k-1. Next (at count_2), it contains the number of instances of
keys less than k. In the B_i lines, count[k-1] indicates the position (in B) at
which to put the next instance of key k. (It's k-1 in these places, rather than k,
because 1 is the smallest key.)
/** Assuming that A and B are not the same array and are of
 * the same size, sort the elements of A stably into B.
 */
void distributionSort2(Record[] A, Record[] B)
{
    int N = A.length;
    int L = min(A), U = max(A);

    /* count[i-L] will contain the number of items < i */
    // NOTE: count[U-L+1] is not terribly useful, but is
    // included to avoid having to test for i == U in
    // the first i loop below.
    int[] count = new int[U-L+2];
    // Clear count: Not really needed in Java, but a good habit
    // to get into for other languages (e.g., C, C++).
    for (int j = L; j <= U+1; j += 1)
        count[j-L] = 0;
    for (int i = 0; i < N; i += 1)
        count[key(A[i]) - L + 1] += 1;
    /* Now count[i-L] == # of records whose key is equal to i-1 */
    // See Figure 8.5, point count_1.
    for (int j = L+1; j <= U; j += 1)
        count[j-L] += count[j-L-1];
    /* Now count[k-L] == # of records whose key is less than k,
     * for all k, L <= k <= U. */
    // See Figure 8.5, point count_2.
    for (int i = 0; i < N; i += 1) {
        /* Now count[k-L] == # of records whose key is less than k,
         * or whose key is k and have already been moved to B. */
        B[count[key(A[i])-L]] = A[i];
        count[key(A[i])-L] += 1;
        // See Figure 8.5, points B_0 - B_6.
    }
}
Figure 8.6: Distribution Sorting. This program assumes that key(R) is an integer.
modified to produce its output in a separate output array, and then it is relatively
easy to maintain stability [how?].

It should be clear that the algorithm above is insensitive to the data. Unlike
insertion sort, it always takes the same number of key comparisons: N(N−1)/2.
Thus, in this naive form, although it is very simple, it suffers in comparison to
insertion sort (at least on a sequential machine).

On the other hand, we have seen another kind of selection sort before: heapsort
(from §6.4) is a form of selection sort that (in effect) keeps around information about
the results of comparisons from each previous pass, thus speeding up the minimum
selection considerably.
8.7 Exchange sorting: Quicksort
One of the most popular methods for internal sorting was developed by
C. A. R. Hoare³. Evidently much taken with the technique, he named it quicksort.
The name is actually quite appropriate. The basic algorithm is as follows.

³Knuth's reference: Computing Journal 5 (1962), pp. 10–15.

static final int K = ...;

void quickSort(Record A[])
{
    quickSort(A,0,A.length-1);
    insertionSort(A);
}

/* Permute A[L..U] so that all records are < K away from their */
/* correct positions in sorted order. Assumes K > 0. */
void quickSort(Record[] A, int L, int U)
{
    if (U-L+1 > K) {
        Choose Record T = A[p], where L ≤ p ≤ U;
        P: Set i and permute A[L..U] to establish the partitioning
           condition:

             +----------------+---+----------------+
             |  key ≤ key(T)  | T |  key ≥ key(T)  |
             +----------------+---+----------------+
              L                 i                  U

        quickSort(A, L, i-1); quickSort(A, i+1, U);
    }
}
Here, K is a constant value that can be adjusted to tune the speed of the sort. Once
the approximate sort gets all records within a distance K-1 of their final locations,
the final insertion sort proceeds in O(KN) time. If T can be chosen so that its
key is near the median key for the records in A, then we can compute roughly that
the time in key comparisons required for performing quicksort on N records is
approximated by C(N), defined as follows.

    C(K) = 0
    C(N) = N − 1 + 2C(⌊N/2⌋)

This assumes that we can partition an N-element array in N−1 comparisons, which
we'll see to be possible. We can get a sense for the solution by considering the case
N = 2^m K:

    C(N) = 2^m K − 1 + 2C(2^{m-1} K)
         = 2^m K − 1 + 2^m K − 2 + 4C(2^{m-2} K)
         = ...
         = (2^m K + ... + 2^m K)   [m terms]
           − (1 + 2 + 4 + ... + 2^{m-1}) + 2^m C(K)
         = m 2^m K − 2^m + 1
         ∈ Θ(m 2^m K) = Θ(N lg N)

(since lg(2^m K) = m + lg K).

Unfortunately, in the worst case, where the partition T has the largest or small-
est key, quicksort is essentially a straight selection sort, with running time Θ(N²).
Thus, we must be careful in the choice of the partitioning element. One technique
is to choose a random record's key for T. This is certainly likely to avoid the bad
cases. A common choice for T is the median of A[L], A[(L+U)/2], and A[U], which
is also unlikely to fail.
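In Java, the median-of-three choice might be sketched as follows. This helper is
not from the text; it assumes the same before function used in the earlier examples,
and a call such as medianOfThree(A, L, (L+U)/2, U) would supply the p used in
choosing T.

/** The index, among i, j, and k, of the median of A[i], A[j],
 *  and A[k] under the before ordering. */
static int medianOfThree(Record[] A, int i, int j, int k) {
    // Order the pair (i, j) so that A[i] comes no later than A[j].
    if (before(A[j], A[i])) { int t = i; i = j; j = t; }
    if (before(A[k], A[j])) {
        // A[k] is not the largest; the median is the later of A[i], A[k].
        j = k;
        if (before(A[j], A[i])) { int t = i; i = j; j = t; }
    }
    return j;
}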
Partitioning. This leaves the small loose end of how to partition the array at
each stage (step P in the program above). There are many ways to do this. Here is
one due to Nico Lomuto; not the fastest, but simple.

P:
swap(A, L, p);
i = L;
for (int j = L+1; j <= U; j += 1) {
    /* A[L..U]:
       +---+-------+--------+--------------+
       | T |  < T  |  ≥ T   |  unexamined  |
       +---+-------+--------+--------------+
        L           i        j            U   */
    if (before(A[j],T)) {
        i += 1;
        swap(A, j, i);
    }
}
/* A[L..U]:
   +---+-------+--------+
   | T |  < T  |  ≥ T   |
   +---+-------+--------+
    L           i      U   */
swap(A, L, i);
/* A[L..U]:
   +-------+---+--------+
   |  < T  | T |  ≥ T   |
   +-------+---+--------+
    L       i          U   */

Some authors go to the trouble of developing non-recursive versions of quick-
sort, evidently under the impression that they are thereby vastly improving its
performance. This view of the cost of recursion is widely held, so I suppose I can't
be surprised. However, a quick test using a C version indicated about a 3% im-
provement using the iterative version. This is hardly worth obscuring one's code to
obtain.
8.8 Merge sorting
Quicksort was a kind of divide-and-conquer algorithm⁴ that we might call "try
to divide-and-conquer," since it is not guaranteed to succeed in dividing the data
evenly. An older technique, known as merge sorting, is a form of divide-and-conquer
that does guarantee that the data are divided evenly.

⁴The term divide-and-conquer is used to describe algorithms that divide a problem into some
number of smaller problems, and then combine the answers to those into a single result.

At a high level, it goes as follows.

/** Sort items A[L..U]. */
static void mergeSort(Record[] A, int L, int U)
{
    if (L >= U)
        return;
    mergeSort(A, L, (L+U)/2);
    mergeSort(A, (L+U)/2+1, U);
    merge(A, L, (L+U)/2, A, (L+U)/2+1, U, A, L);
}

The merge program has the following specification:

/** Assuming V0[L0..U0] and V1[L1..U1] are each sorted in
 * ascending order by keys, set V2[L2..U2] to the sorted contents
 * of V0[L0..U0], V1[L1..U1]. (U2 = L2+U0+U1-L0-L1+1.) */
void merge(Record[] V0, int L0, int U0, Record[] V1, int L1, int U1,
           Record[] V2, int L2)

Since V0 and V1 are in ascending order already, it is easy to do this in Θ(N) time,
where N = U2 − L2 + 1, the combined size of the two arrays. Merging progresses
through the arrays from left to right. That makes it well-suited for computers with
small memories and lots to sort. The arrays can be on secondary storage devices
that are restricted to sequential access, i.e., that require reading or writing the
arrays in increasing (or decreasing) order of index⁵.
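A minimal in-memory version of merge, under the specification above, might look
as follows. This is a sketch only: it assumes the destination region of V2 does not
overlap the source regions (the call in mergeSort above merges two halves of A back
into A itself, which strictly requires first copying one of the halves aside, a detail
omitted here).

/** A sketch of merge under the specification above, assuming
 *  the destination region does not overlap the sources. */
void merge(Record[] V0, int L0, int U0, Record[] V1, int L1, int U1,
           Record[] V2, int L2) {
    int i = L0, j = L1, k = L2;
    while (i <= U0 && j <= U1) {
        // Take from V0 on ties, so that the merge is stable.
        if (before(V1[j], V0[i]))
            V2[k++] = V1[j++];
        else
            V2[k++] = V0[i++];
    }
    while (i <= U0)
        V2[k++] = V0[i++];
    while (j <= U1)
        V2[k++] = V1[j++];
}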
The real work is done by the merging process, of course. The pattern of these
merges is rather interesting. For simplicity, consider the case where N is a power
of two. If you trace the execution of mergeSort, you'll see the following pattern of
calls on merge.

Call #   V0        V1
0.       A[0]      A[1]
1.       A[2]      A[3]
2.       A[0..1]   A[2..3]
3.       A[4]      A[5]
4.       A[6]      A[7]
5.       A[4..5]   A[6..7]
6.       A[0..3]   A[4..7]
7.       A[8]      A[9]
etc.
We can exploit this pattern to good advantage when trying to do merge sorting
on linked lists of elements, where the process of dividing the list in half is not
as easy as it is for arrays. Assume that records are linked together into Lists.
The program below shows how to perform a merge sort on these lists; Figure 8.7
illustrates the process. The program maintains a binomial comb of sorted sublists,
comb[0 .. M-1], such that the list in comb[i] is either null or has length 2^i.

/** Permute the Records in List A so as to be sorted by key. */
static void mergeSort(List<Record> A)
{
    int M = a number such that 2^M − 1 ≥ length of A;

    List<Record>[] comb = new List<Record>[M];

    for (int i = 0; i < M; i += 1)
        comb[i] = new LinkedList<Record> ();

    for (Record R : A)
        addToComb(comb, R);
    A.clear ();
    for (List<Record> L : comb)
        mergeInto(A, L);
}
⁵A familiar movie cliché of decades past was spinning tape units to indicate that some piece of
machinery was a computer (also operators flipping console switches, something one almost never
really did during normal operation). When those images came from footage of real computers, the
computer was most likely sorting.
At each point, the comb contains sorted lists that are to be merged. We first build
up the comb one new item at a time, and then take a final pass through it, merging
all its lists. To add one element to the comb, we have

/** Assuming that each C[i] is a sorted list whose length is either
 * 0 or 2^i elements, adds P to the items in C so as to
 * maintain this same condition. */
static void addToComb(List<Record> C[], Record p)
{
    if (C[0].size () == 0) {
        C[0].add (p);
        return;
    } else if (before(C[0].get (0), p))
        C[0].add (p);
    else
        C[0].add (0, p);
    // Now C[0] contains 2 items
    int i;
    for (i = 1; C[i].size () != 0; i += 1)
        mergeLists(C[i], C[i-1]);
    C[i] = C[i-1]; C[i-1] = new LinkedList<Record> ();
}
I leave to you the mergeLists procedure:
/** Merge L1 into L0, producing a sorted list containing all the
* elements originally in L0 and L1. Assumes that L0 and L1 are
* each sorted initially (according to the before ordering).
* The result ends up in L0; L1 becomes empty. */
static void mergeLists (List<Record> L0, List<Record> L1) ...
8.8.1 Complexity
The optimistic time estimate for quicksort applies in the worst case to merge sorting,
because merge sorts really do divide the data in half with each step (and merging of
two lists or arrays takes linear time). Thus, merge sorting is a Θ(N lg N) algorithm,
with N the number of records. Unlike quicksort or insertion sort, merge sorting as
I have described it is generally insensitive to the ordering of the data. This changes
somewhat when we consider external sorting, but O(N lg N) comparisons remains
an upper bound.
8.9 Speed of comparison-based sorting
I've presented a number of algorithms and have claimed that the best of them require
Θ(N lg N) comparisons in the worst case. There are several obvious questions to
L: (9, 15, 5, 3, 0, 6, 10, 1, 2, 20, 8)
  [0] 0:
  [0] 1:
  [0] 2:
  [0] 3:
0 elements processed

L: (15, 5, 3, 0, 6, 10, 1, 2, 20, 8)
  [1] 0: (9)
  [0] 1:
  [0] 2:
  [0] 3:
1 element processed

L: (5, 3, 0, 6, 10, 1, 2, 20, 8)
  [0] 0:
  [1] 1: (9, 15)
  [0] 2:
  [0] 3:
2 elements processed

L: (3, 0, 6, 10, 1, 2, 20, 8)
  [1] 0: (5)
  [1] 1: (9, 15)
  [0] 2:
  [0] 3:
3 elements processed

L: (0, 6, 10, 1, 2, 20, 8)
  [0] 0:
  [0] 1:
  [1] 2: (3, 5, 9, 15)
  [0] 3:
4 elements processed

L: (10, 1, 2, 20, 8)
  [0] 0:
  [1] 1: (0, 6)
  [1] 2: (3, 5, 9, 15)
  [0] 3:
6 elements processed

L: ()
  [1] 0: (8)
  [1] 1: (2, 20)
  [0] 2:
  [1] 3: (0, 1, 3, 5, 6, 9, 10, 15)
11 elements processed

Figure 8.7: Merge sorting of lists, showing the state of the comb after various
numbers of items from the list L have been processed. The final step is to merge
the lists remaining in the comb after all 11 elements from the original list have
been added to it. The 0s and 1s in the small boxes (shown here in brackets before
each slot) are decorations to illustrate the pattern of merges that occurs. Each
empty box has a 0 and each non-empty box has a 1. If you read the contents of the
four boxes as a single binary number, units bit on top, it equals the number of
elements processed.
ask about this bound. First, how do "comparisons" translate into "instructions"?
Second, can we do better than N lg N?

The point of the first question is that I have been a bit dishonest to suggest that
a comparison is a constant-time operation. For example, when comparing strings,
the size of the strings matters in the time required for comparison in the worst case.
Of course, on the average, one expects not to have to look too far into a string to
determine a difference. Still, this means that to correctly translate comparisons into
instructions, we should throw in another factor of the length of the key. Suppose
that the N records in our set all have distinct keys. This means that the keys
themselves have to be Ω(lg N) long. Assuming keys are no longer than necessary,
and assuming that comparison time goes up proportionally to the size of a key (in
the worst case), this means that sorting really takes Θ(N(lg N)²) time (assuming
that the time required to move one of these records is at worst proportional to the
size of the key).

As to the question about whether it is possible to do better than Θ(N lg N), the
answer is that if the only information we can obtain about keys is how they com-
pare to each other, then we cannot do better than Θ(N lg N). That is, Ω(N lg N)
comparisons is a lower bound on the worst case of all possible sorting algorithms
that use comparisons.

The proof of this assertion is instructive. A sorting program can be thought of
as first performing a sequence of comparisons, and then deciding how to permute
its inputs, based only on the information garnered by the comparisons. The two
operations actually get mixed, of course, but we can ignore that fact here. In order
for the program to know enough to permute two different inputs differently, these
inputs must cause different sequences of comparison results. Thus, we can represent
this idealized sorting process as a tree in which the leaf nodes are permutations and
the internal nodes are comparisons, with each left child containing the comparisons
and permutations that are performed when the comparison turns out true and the
right child containing those that are performed when the comparison turns out false.
Figure 8.8 illustrates this for the case N = 3. The height of this tree corresponds to
the number of comparisons performed. Since the number of possible permutations
(and thus leaves) is N!, and the minimal height of a binary tree with M leaves is
⌈lg M⌉, the minimal height of the comparison tree for N records is roughly lg(N!).
Now

    lg N! = lg N + lg(N−1) + ... + 1
          ≤ lg N + lg N + ... + lg N
          = N lg N
          ∈ O(N lg N)

and also (taking N to be even)

    lg N! ≥ lg N + lg(N−1) + ... + lg(N/2)
          ≥ (N/2 + 1) lg(N/2)
          ∈ Ω(N lg N)
                      A < B
                    /       \
               B < C         A < C
              /     \       /     \
        (A,B,C)    A < C  (B,A,C)  B < C
                  /     \         /     \
            (A,C,B) (C,A,B) (B,C,A) (C,B,A)

Figure 8.8: A comparison tree for N = 3. The three values being sorted are A,
B, and C. Each internal node indicates a test. The left children indicate what
happens when the test is successful (true), and the right children indicate what
happens if it is unsuccessful. The leaf nodes (in parentheses) indicate the ordering of
the three values that is uniquely determined by the comparison results that lead
down to them. We assume here that A, B, and C are distinct. This tree is optimal,
demonstrating that three comparisons are needed in the worst case to sort three
items.
so that

    lg N! ∈ Θ(N lg N).

Thus any sorting algorithm that uses only (true/false) key comparisons to get in-
formation about the order of its inputs' keys requires Ω(N lg N) comparisons in the
worst case to sort N keys.
8.10 Radix sorting
To get the result in §8.9, we assumed that the only examination of keys available was
comparing them for order. Suppose that we are not restricted to simply comparing
keys. Can we improve on our O(N lg N) bounds? Interestingly enough, we can,
sort of. This is possible by means of a technique known as radix sort.

Most keys are actually sequences of fixed-size pieces (characters or bytes, in
particular) with a lexicographic ordering relation: that is, the key k_0 k_1 ⋯ k_{n−1} is
less than k′_0 k′_1 ⋯ k′_{n−1} if k_0 < k′_0, or k_0 = k′_0 and k_1 ⋯ k_{n−1} is less than k′_1 ⋯ k′_{n−1}
(we can always treat the keys as having equal length by choosing a suitable padding
character for the shorter string). Just as in a search trie we used successive charac-
ters in a set of keys to distribute the strings amongst subtrees, we can use successive
characters of keys to sort them. There are basically two varieties of algorithm:
one that works from least significant to most significant digit (LSD-first) and one
that works from most significant to least significant digit (MSD-first). I use "digit"
here as a generic term encompassing not only decimal digits, but also alphabetic
characters, or whatever is appropriate to the data one is sorting.
8.10.1 LSD-first radix sorting
The idea of the LSD-first algorithm is to first use the least significant character to
order all records, then the second-least significant, and so forth. At each stage, we
perform a stable sort, so that if the k most significant characters of two records
are identical, they will remain sorted by the remaining, least significant, characters.
Because characters have a limited range of values, it is easy to sort them in linear
time (using, for example, distributionSort2, or, if the records are kept in a linked
list, by keeping an array of list headers, one for each possible character value).
Figure 8.9 illustrates the process.
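The linked-list variant might be sketched in Java as follows, for keys that are
Strings of equal length. The 256-bucket assumption (8-bit characters) and the
equal-length restriction are mine, not the text's.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

/** Sort WORDS, all strings of the same length, into ascending order
 *  by LSD-first radix sort: one stable distribution pass per
 *  character position, from last character to first. */
static void lsdRadixSort(String[] words) {
    if (words.length == 0)
        return;
    int W = words[0].length();
    List<Queue<String>> bins = new ArrayList<>();
    for (int c = 0; c < 256; c += 1)
        bins.add(new ArrayDeque<String>());
    for (int k = W - 1; k >= 0; k -= 1) {
        for (String w : words)             // distribute, preserving order
            bins.get(w.charAt(k)).add(w);
        int i = 0;
        for (Queue<String> bin : bins)     // concatenate the bins' contents
            while (!bin.isEmpty())
                words[i++] = bin.remove();
    }
}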
LSD-first radix sort is precisely the algorithm used by card sorters. These ma-
chines had a series of bins and could be programmed (using plugboards) to drop
cards from a feeder into bins depending on what was punched in a particular col-
umn. By repeating the process for each column, one ended up with a sorted deck
of cards.

Each distribution of a record to a bin takes (about) constant time (assuming we
use pointers to avoid moving large amounts of data around). Thus, the total time
is proportional to the total amount of key data, which is the total number of bytes
in all keys. In other words, radix sorting is O(B) where B is the total number of
bytes of key data. If keys are K bytes long, then B = NK, where N is the number
of records. Since merge sorting, heap sorting, etc., require O(N lg N) comparisons,
each requiring in the worst case K time, we get a total time of O(NK lg N) =
O(B lg N) time for these sorts. Even if we assume constant comparison time, if
keys are no longer than they have to be (in order to provide N different keys we
must have K ≥ log_C N, where C is the number of possible characters), then radix
sorting is also O(N lg N).
Thus, relaxing the constraint on what we can do to keys yields a fast sorting
procedure, at least in principle. As usual, the Devil is in the details. If the keys
are considerably longer than log_C N, as they very often are, the passes made on the
last characters will typically be largely wasted. One possible improvement, which
Knuth credits to M. D. Maclaren, is to use LSD-first radix sort on the first two
characters, and then finish with an insertion sort (on the theory that things will
almost be in order after the radix sort). We must fudge the definition of "character"
for this purpose, allowing characters to grow slightly with N. For example, when
N = 100000, Maclaren's optimal procedure is to sort on the first and second 10-bit
segments of the key (on an 8-bit machine, this is the first 2.5 characters). Of
course, this technique can, in principle, make no guarantees of O(B) performance.
8.10.2 MSD-first radix sorting
Performing radix sort starting at the most significant digit probably seems more
natural to most of us. We sort the input by the first (most-significant) character
into C (or fewer) subsequences, one for each starting character (that is, the first
Initial:           set, cat, cad, con, bat, can, be, let, bet

  bin □: be
  bin d: cad
  bin n: con, can
  bin t: set, cat, bat, let, bet

After first pass:  be, cad, con, can, set, cat, bat, let, bet

  bin a: cad, can, cat, bat
  bin e: be, set, let, bet
  bin o: con

After second pass: cad, can, cat, bat, be, set, let, bet, con

  bin b: bat, be, bet
  bin c: cad, can, cat, con
  bin l: let
  bin s: set

After final pass:  bat, be, bet, cad, can, cat, con, let, set

Figure 8.9: An example of an LSD-first radix sort. Each pass sorts by one character,
starting with the last (□ is the padding character for the short key "be"). Sorting
consists of distributing the records to bins indexed by characters, and then
concatenating the bins' contents together. Only non-empty bins are shown.
A                                                          posn
*set, cat, cad, con, bat, can, be, let, bet*                 0
*bat, be, bet* / cat, cad, con, can / let / set              1
bat / *be, bet* / cat, cad, con, can / let / set             2
bat / be / bet / *cat, cad, con, can* / let / set            1
bat / be / bet / *cat, cad, can* / con / let / set           2
bat / be / bet / cad / can / cat / con / let / set

Figure 8.10: An example of an MSD radix sort on the same data as in Figure 8.9.
The first line shows the initial contents of A and the last shows the final contents.
Partially-sorted segments that agree in their initial characters are separated by
single slash (/) characters. The asterisks mark the segment of the array that
is about to be sorted and the posn column shows which character position is about
to be used for the sort.
character of all the keys in any given subsequence is the same). Next, we sort each
of the subsequences that has more than one key individually by its second character,
yielding another group of subsequences in which all keys in any given subsequence
agree in their first two characters. This process continues until all subsequences
are of length 1. At each stage, we order the subsequences, so that one subsequence
precedes another if all its strings precede all those in the other. When we are done,
we simply write out all the subsequences in the proper order.

The tricky part is keeping track of all the subsequences so that they can be
output in the proper order at the end and so that we can quickly find the next
subsequence of length greater than one. Here is a sketch of one technique for sorting
an array; it is illustrated in Figure 8.10.

static final int ALPHA = size of alphabet of digits;

/** Sort A[L..U] stably, ignoring the first k characters in each key. */
static void MSDradixSort(Record[] A, int L, int U, int k) {
    int[] countLess = new int[ALPHA+1];
    Sort A[L..U] stably by the kth character of each key, and for each
      digit, c, set countLess[c] to the number of records in A
      whose kth character comes before c in alphabetical order.
    for (int i = 0; i < ALPHA; i += 1)
        if (countLess[i+1] - countLess[i] > 1)
            MSDradixSort(A, L + countLess[i],
                         L + countLess[i+1] - 1, k+1);
}
8.11 Using the library
Notwithstanding all the trouble we've taken in this chapter to look at sorting algo-
rithms, in most programs you shouldn't even think about writing your own sorting
subprogram! Good libraries provide them for you. The Java standard library has
a class called java.util.Collections, which contains only static definitions of
useful utilities related to Collections. For sorting, we have

/** Sort L stably into ascending order, as defined by C. L must
 * be modifiable, but need not be expandable. */
public static <T> void sort (List<T> L, Comparator<? super T> c) { ... }

/** Sort L into ascending order, as defined by the natural ordering
 * of the elements. L must be modifiable, but need not be expandable. */
public static <T extends Comparable<T>> void sort (List<T> L) { ... }

These two methods use a form of mergesort, guaranteeing O(N lg N) worst-case
performance. Given these definitions, you should not generally need to write your
own sorting routine unless the sequence to be sorted is extremely large (in partic-
ular, if it requires external sorting), if the items to be sorted have primitive types
(like int), or you have an application where it is necessary to squeeze every single
microsecond out of the algorithm (a rare occurrence).
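A typical use of these library methods, as a fragment (the word list here is just
illustrative data), looks like this:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

List<String> words =
    new ArrayList<String>(Arrays.asList("set", "cat", "cad", "con"));
Collections.sort(words);                             // [cad, cat, con, set]
Collections.sort(words, Collections.reverseOrder()); // [set, con, cat, cad]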
8.12 Selection
Consider the problem of finding the median value in an array: a value in the array
with as many array elements less than it as greater than it. A brute-force method
of finding such an element is to sort the array and choose the middle element (or a
middle element, if the array has an even number of elements). However, we can do
substantially better.

The general problem is selection: given a (generally unsorted) sequence of ele-
ments and a number k, find the kth value in the sorted sequence of elements. Finding
a median, maximum, or minimum value is a special case of this general problem.
Perhaps the easiest efficient method is the following simple adaptation of Hoare's
quicksort algorithm.
/** Assuming 0<=k<N, return a record of A whose key is kth smallest
 * (k=0 gives the smallest, k=1, the next smallest, etc.). A may
 * be permuted by the algorithm. */
Record select(Record[] A, int L, int U, int k) {
    Record T = some member of A[L..U];
    Permute A[L..U] and find p to establish the partitioning
      condition:

        +----------------+---+----------------+
        |  key ≤ key(T)  | T |  key ≥ key(T)  |
        +----------------+---+----------------+
         L                 p                  U

    if (p-L == k)
        return T;
    if (p-L < k)
        return select(A, p+1, U, k - p + L - 1);
    else
        return select(A, L, p-1, k);
}
The key observation here is that when the array is partitioned as for quicksort, the
value T is the (p − L)th smallest element (counting the smallest as number 0); the
p − L smallest record keys will be in A[L..p-1]; and the larger record keys will be
in A[p+1..U]. Hence, if k < p − L, the kth smallest key is in the left part of A, and
if k > p − L, it must be the (k − p + L − 1)st smallest key in the right part.

Optimistically, assuming that each partition divides the array in half, the recur-
rence governing cost here (measured in number of comparisons) is

    C(1) = 0
    C(N) = N + C(⌈N/2⌉),

where N = U − L + 1. The N comes from the cost of partitioning, and the C(⌈N/2⌉)
from the recursive call. This differs from the quicksort and mergesort recurrences
by the fact that the multiplier of C(·) is 1 rather than 2. For N = 2^m we get

    C(N) = 2^m + C(2^{m-1})
         = 2^m + 2^{m-1} + C(2^{m-2})
         = ...
         = 2^m + 2^{m-1} + ... + 2
         = 2^{m+1} − 2 = 2N − 2
         ∈ Θ(N)

This algorithm is only probabilistically good, just as was quicksort. There are
selection algorithms that guarantee linear bounds, but we'll leave them to a course
on algorithms.
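So, for example, the median problem that began this section reduces to a single
call on the select procedure above (a sketch, assuming A is non-empty; recall that
select may reorder A):

Record median = select(A, 0, A.length - 1, A.length / 2);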
Exercises

8.1. You are given two sets of keys (i.e., so that neither contains duplicate keys),
S_0 and S_1, both represented as arrays. Assuming that you can compare keys for
"greater than or equal to," how would you compute the intersection of S_0 and S_1,
and how long would it take?

8.2. Given a large list of words, how would you quickly find all anagrams in the
list? (An anagram here is a word in the list that can be formed from another word
on the list by rearranging its letters, as in "dearth" and "thread").

8.3. Suppose that we have an array, D, of N records. Without modifying this
array, I would like to compute an N-element array, P, containing a permutation of
the integers 0 to N−1 such that the sequence D[P[0]], D[P[1]], ..., D[P[N−1]] is
sorted stably. Give a general method that works with any sorting algorithm (stable
or not) and doesn't require any additional storage (other than that normally used
by the sorting algorithm).

8.4. A very simple spelling checker simply removes all ending punctuation from
its words and looks up each in a dictionary. Compare ways of doing this from the
classes in the Java library: using an ArrayList to store the words in sorted order, a
TreeSet, and a HashSet. There's little programming involved, aside from learning
to use the Java library.

8.5. I am given a list of ranges of numbers, [x_i, x′_i], each with 0 ≤ x_i < x′_i ≤ 1.
I want to know all the ranges of values between 0 and 1 that are not covered by
one of these ranges of numbers. So, if the only input is [0.25, 0.5], then the output
would be [0.0, 0.25] and [0.5, 1.0] (never mind the end points). Show how to do this
quickly.
Chapter 9

Balanced Searching

We've seen that binary search trees have a weakness: a tendency to become
unbalanced, so that they are ineffective in dividing the set of data they represent
into two substantially smaller parts. Let's consider what we can do about this.

Of course, we could always rebalance an unbalanced tree by simply laying all
the keys out in order and then re-inserting them in such a way as to keep the tree
balanced. That operation, however, requires time linear in the number of keys in
the tree, and it is difficult to see how to avoid having a Θ(N²) factor creep in to
the time required to insert N keys. By contrast, only O(N lg N) time is required
to make N insertions if the data happen to be presented in an order that keeps the
tree bushy. So let's first look at operations to re-balance a tree (or keep it balanced)
without taking it apart and reconstructing it.
9.1 Balance by Construction: B-Trees
Another way to keep a search tree balanced is to be careful always to insert new
keys in a good place so that the tree remains bushy by construction. The database
community has long used a data structure that does exactly this: the B-tree¹. We
will describe the data structure and operations abstractly here, rather than give
code, since in practice there is a whole raft of devices one uses to gain speed.

A B-tree of order m is a positional tree with the following properties:

1. All nodes have m or fewer children.

2. All nodes other than the root have at least m/2 children (we can also say that
   each node other than the root contains at least ⌈m/2⌉ children²).

3. A node with children C_0, C_1, ..., C_{n−1} is labeled with keys K_1, ..., K_{n−1}
   (think of key K_i as resting between C_{i−1} and C_i), with K_1 < K_2 < ⋯ < K_{n−1}.

4. A B-tree is a search tree: For any node, all keys in the subtree rooted at C_i
   are strictly less than K_{i+1}, and (for i > 0), strictly greater than K_i.

5. All the empty children occur at the same level of the tree.

¹D. Knuth's reference: R. Bayer and E. McCreight, Acta Informatica (1972), pp. 173–189, and
also unpublished independent work by M. Kaufman.

²The notation ⌈x⌉ means the smallest integer that is ≥ x.

Figure 9.1 contains an example of an order-4 tree. In real implementations, B-trees
tend to be kept on secondary storage (disks and the like), with their nodes being
read in as needed. We choose m so as to make the transfer of data from secondary
storage as fast as possible. Disks in particular tend to have minimum transfer times
for each read operation, so that for a wide range of values of m, there is little
difference in the time required to read in a node. Making m too small in that case
is an obviously bad idea.
We'll represent the nodes of a B-tree with a structure we'll call a BTreeNode,
for which we'll use the following terminology:

B.child(i)    Child number i of B-tree node B, where 0 ≤ i < m.
B.key(i)      Key number i of B-tree node B, where 1 ≤ i < m.
B.parent()    The parent node of B.
B.index()     The integer, i, such that B == B.parent().child(i).
B.arity()     The number of children in B.

An entire B-tree, then, would consist of a pointer to the root, with perhaps some
extra useful information, such as the current size of the B-tree.

Because of properties (2) and (5), a B-tree containing N keys must have
O(log_{⌈m/2⌉} N) levels. Because of property (1), searching a single node's keys takes
O(1) time (we assume m is fixed). Therefore, searching a B-tree by the following
obvious recursive algorithm is an O(log_m N) = O(lg N) operation:

boolean search (BTreeNode B, Key X) {
    if (B is the empty tree)
        return false;
    else {
        Find largest c such that B.key(i) ≤ X, for all 1 ≤ i ≤ c.
        if (c > 0 && X.equals (B.key (c)))
            return true;
        else
            return search (B.child (c), X);
    }
}
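Filling in the blanks, a more concrete rendering of this search might look as
follows. The isEmpty test and the use of Comparable keys are my assumptions;
the text leaves the Key type and the representation of empty nodes unspecified.

/** True iff X appears in the B-tree rooted at B. A sketch assuming
 *  Key implements Comparable<Key> and that empty nodes answer
 *  isEmpty() rather than being null. */
static boolean search(BTreeNode B, Key X) {
    if (B.isEmpty())
        return false;
    int c = 0;
    // Advance c while key #(c+1) exists and is <= X, so that
    // afterward B.key(i) <= X for all 1 <= i <= c.
    while (c + 1 < B.arity() && B.key(c + 1).compareTo(X) <= 0)
        c += 1;
    if (c > 0 && X.equals(B.key(c)))
        return true;
    return search(B.child(c), X);
}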
                              [ 115 ]
                             /        \
             [ 25 | 55 | 90 ]          [ 125 ]
            /     |    |     \          /     \
     [10 20] [30 40 50] [60] [95 100] [120] [130 140 150]

Figure 9.1: Example of a B-tree of order 4 with integer keys. The empty nodes,
which are the children of the bottom row and all appear at the same level, are
not shown. Each node has two to four children, and one to three keys. Each key
is greater than all keys in the children to its left and less than all keys in the
children to its right.
9.1.1 B-tree Insertion
Initially, we insert into the bottom of a B-tree, just as for binary search trees.
However, we avoid "scrawny" trees by filling our nodes up and splitting them,
rather than extending them down. The idea is simple: we find an appropriate place
at the bottom of the tree to insert a given key, and perform the insertion (also
adding an additional empty child). If this makes the node too big (so that it has m
keys and m + 1 (empty) children), we split the node, as in the code in Figure 9.2.
Figure 9.3 illustrates the process.

9.1.2 B-tree deletion
Deleting from a B-tree is generally more complicated than insertion, but not too
bad. As usual, real, production implementations introduce numerous intricacies for
speed. To keep things simple, I'll just describe a straightforward, idealized method.
Taking our cue from the way that insertion works, we will first move the key to be
deleted down to the bottom of the tree (where deletion is straightforward). Then,
if deletion has made the original node too small, we merge it with a sibling, pulling
down the key that used to separate the two from the parent. The pseudocode in
Figure 9.4 describes the process, which is also illustrated in Figure 9.5.
9.2 Tries
Loosely speaking, balanced (maximally bushy) binary search trees containing N
keys require Θ(lg N) time to find a key. This is not entirely accurate, of course,
because it neglects the possibility that the time required to compare against a key
depends on the key. For example, the time required to compare two strings depends
on the length of the shorter string. Therefore, in all the places I've said Θ(lg N)
before, I really meant Θ(L lg N) for L a bound on the number of bytes in the
/** Split B-tree node B, which has from m+1 to 2m+1
 * children. */
void split (BTreeNode B) {
    int k = B.arity () / 2;
    Key X = B.key (k);
    BTreeNode B2 = a new BTree node;
    move B.child (k) through B.child (m) and
      B.key (k+1) through B.key (m) out of B and into B2;
    remove B.key (k) from B;
    if (B was the root) {
        create a new root with children B and B2
          and with key X;
    } else {
        BTreeNode P = B.parent ();
        int c = B.index ();
        insert child B2 at position c+1 in P, and key
          X at position c+1 in P, moving subsequent
          children and keys of P over as needed;
        if (P.arity () > m)
            split (P);
    }
}

Figure 9.2: Splitting a B-tree node. Figure 9.3 contains illustrations.
(a) Insert 15:

     [10 20]    becomes    [10 15 20]

(b) Insert 145:

          [ 125 ]                            [ 125 | 140 ]
         /       \           becomes         /     |      \
     [120]   [130 140 150]               [120]  [130]  [145 150]

(c) Insert 35 (only the left part of the tree is shown):

              [ 115 ]                              [ 35 | 115 ]
             /                                    /       |
     [ 25 | 55 | 90 ]            becomes      [25]     [55 | 90]
    /     |    |     \                        /   \     /   |    \
[10 15 20][30 40 50][60][95 100]     [10 15 20] [30] [40 50] [60] [95 100]

Figure 9.3: Inserting into a B-tree. The examples modify the tree in Figure 9.1 by
inserting 15, 145, and then 35.
/** Delete B.key(i) from the B-tree containing B. */
void deleteKey (BTreeNode B, int i) {
    if (B's children are all empty)
        remove B.key(i), moving over remaining keys;
    else {
        int n = B.child(i-1).arity();
        merge (B, i);
        // The key we want to delete is now #n of child #i-1.
        deleteKey (B.child(i-1), n);
    }
    if (B.arity () > m) // Happens only on recursive calls
        split (B);
    regroup (B);
}

/** Move B.key(i) and the contents of B.child(i) into B.child(i-1),
 * after its existing keys and children. Remove B.key(i) and
 * B.child(i) from B, moving over the remaining contents.
 * (Temporarily makes B.child(i-1) too large. The former
 * B.child(i) becomes garbage.) */
void merge (BTreeNode B, int i) { implementation straightforward }

/** If B has too few children, regroup the B-tree to re-establish
 * the B-tree conditions. */
void regroup (BTreeNode B) {
    if (B is the root && B.arity () == 1)
        make B.child(0) the new root;
    else if (B is not the root && B.arity () < ⌈m/2⌉) {
        if (B.index () == 0)
            merge (B.parent (), 1);
        else
            merge (B.parent (), B.index ());
        regroup (B.parent ());
    }
}

Figure 9.4: Deleting from a B-tree node. See Figure 9.5 for an illustration.
(a) Steps in removing 25:

      [ 25 ]
     /      \    =>  [10 15 20 25 30]  =>  [10 15 20 30]  =>     [ 15 ]
[10 15 20]  [30]                                                /      \
                                                             [10]   [20 30]

(b) Removing 10:

            [ 35 | 115 ]                                 [ 115 ]
           /       |                                    /
      [15]      [55 | 90]              becomes    [ 35 | 55 | 90 ]
     /    \     /   |    \                       /     |    |     \
 [10] [20 30] [40 50] [60] [95 100]      [15 20 30] [40 50] [60] [95 100]

Figure 9.5: Deletion from a B-tree. The examples start from the final tree in
Figure 9.3(c). In (a), we remove 25. First, we merge to move 25 to the bottom.
Then we remove it and split the resulting node (which is too big) at 15. Next, (b)
shows deletion of 10 from the tree produced by (a). Deleting 10 from its node at
the bottom makes that node too small, so we merge it, moving 15 down from the
parent. That in turn makes the parent too small, so we merge it, moving 35 down
from the root, giving the final tree.
key. In most applications, this doesn't much matter, since L tends to increase very
slowly, if at all, as N increases. Nevertheless, we come to an interesting question:
we evidently can't get rid of the factor of L too easily (after all, you have to look
at the key you're searching for), but can we get rid of the factor of lg N?

9.2.1 Tries: basic properties and algorithms
It turns out that we can avoid the lg N factor, using a data structure known as a
trie³. A pure trie is a kind of tree that represents a set of strings from some alphabet
of fixed size, say A = {a_0, ..., a_{M−1}}. One of the characters is a special delimiter
that appears only at the ends of words, □. For example, A might be the set of
printable ASCII characters, with □ represented by an unprintable character, such
as '\000' (NUL). A trie, T, may be abstractly defined by the following recursive
definition⁴: A trie, T, is either

• empty, or

• a leaf node containing a string, or

• an internal node containing M children that are also tries. The edges leading
  to these children are labeled by the characters of the alphabet, a_i, like this:
  C_{a_0}, C_{a_1}, ..., C_{a_{M−1}}.

We can think of a trie as a tree whose leaf nodes are strings. We impose one other
condition:

    If by starting at the root of a trie and following edges labeled
    s_0, s_1, ..., s_{h−1}, we reach a string, then that string begins
    s_0 s_1 ⋯ s_{h−1}.

Therefore, you can think of every internal node of a trie as standing for some prefix
of all the strings in the leaves below it: specifically, an internal node at level k
stands for the first k characters of each string below it.

A string S = s_0 s_1 ⋯ s_{m−1} is in T if, by starting at the root of T and following 0
or more edges labeled s_0, ..., s_j, we arrive at the string S. We will pretend that
all strings in T end in □, which appears only as the last character of a string.

³How is it pronounced? I have no idea. The word was suggested by E. Fredkin in 1960, who
derived it from the word "retrieval." Despite this etymology, I usually pronounce it like "try" to
avoid verbal confusion with "tree."

⁴This version of the trie data structure is described in D. E. Knuth, The Art of Computer
Programming, vol. 3, which is the standard reference on sorting and searching. The original data
structure, proposed in 1959 by de la Briandais, was slightly different.
(root)
├─ a → [a]
│      ├─ □ → a□
│      ├─ b → [ab]
│      │      ├─ a → [aba]
│      │      │      ├─ s → [abas]
│      │      │      │      ├─ e → abase□
│      │      │      │      └─ h → abash□
│      │      │      └─ t → abate□
│      │      └─ b → abbas□
│      └─ x → [ax]
│             ├─ e → axe□
│             └─ o → axolotl□
└─ f → [f]
       └─ a → [fa]
              ├─ b → fabric□
              └─ c → facet□

Figure 9.6: A trie containing the set of strings a, abase, abash, abate, abbas, axe,
axolotl, fabric, facet. Edge labels appear on the branches; leaves (such as abase□)
contain complete strings, and the internal nodes (in brackets) are labeled to show
the string prefixes to which they correspond.
(root)
├─ a → [a]
│      ├─ □ → a□
│      ├─ b → [ab]
│      │      ├─ a → [aba]
│      │      │      ├─ s → [abas]
│      │      │      │      ├─ e → abase□
│      │      │      │      └─ h → abash□
│      │      │      └─ t → abate□
│      │      └─ b → abbas□
│      └─ x → [ax]
│             ├─ e → axe□
│             └─ o → axolotl□
├─ b → bat□
└─ f → [f]
       └─ a → [fa]
              ├─ b → fabric□
              └─ c → [fac]
                     └─ e → [face]
                            ├─ p → faceplate□
                            └─ t → facet□

Figure 9.7: Result of inserting the strings "bat" and "faceplate" into the trie in
Figure 9.6.
Figure 9.6 shows a trie that represents a small set of strings. To see if a string
is in the set, we start at the root of the trie and follow the edges (links to children)
marked with the successive characters in the string we are looking for (including
the imaginary □ at the end). If we succeed in finding a string somewhere along
this path and it equals the string we are searching for, then the string we are
searching for is in the trie. If we don't, it is not in the trie. For each word, we
need internal nodes only as far down as there are multiple words stored that start
with the characters traversed to that point. The convention of ending everything
with a special character allows us to distinguish between a situation in which the
trie contains two words, one of which is a prefix of the other (like "a" and "abate"),
from the situation where the trie contains only one long word.
From a trie user's point of view, it looks like a kind of tree with String labels:

public abstract class Trie {
    /** The empty Trie. */
    public static final Trie EMPTY = new EmptyTrie();

    /** The label at this node. Defined only on leaves. */
    abstract public String label();

    /** True iff X is in this Trie. */
    public boolean isIn(String x) ...

    /** The result of inserting X into this Trie, if it is not
     * already there, and returning this. This trie is
     * unchanged if X is in it already. */
    public Trie insert(String x) ...

    /** The result of removing X from this Trie, if it is present.
     * The trie is unchanged if X is not present. */
    public Trie remove(String x) ...

    /** True if this Trie is a leaf (containing a single String). */
    abstract public boolean isLeaf();

    /** True if this Trie is empty. */
    abstract public boolean isEmpty();

    /** The child numbered with character K. Requires that this node
     * not be empty. Child 0 corresponds to □. */
    abstract public Trie child(int k);

    /** Set the child numbered with character K to C. Requires that
     * this node not be empty. (Intended only for internal use.) */
    abstract protected void child(int k, Trie C);
}
The following algorithm describes a search through a trie.

/** True iff X is in this Trie. */
public boolean isIn(String x) {
    Trie P = longestPrefix(x, 0);
    return P.isLeaf() && x.equals(P.label());
}

/** The node representing the longest prefix of X.substring(K) that
 * matches a String in this trie. */
private Trie longestPrefix(String x, int k) {
    if (isEmpty() || isLeaf())
        return this;
    int c = nth(x, k);
    if (child(c).isEmpty())
        return this;
    else
        return child(c).longestPrefix(x, k+1);
}

/** Character K of X, or □ if K is off the end of X. */
static char nth(String x, int k) {
    if (k >= x.length())
        return (char) 0;
    else
        return x.charAt(k);
}
It should be clear from following this procedure that the time required to find
a key is proportional to the length of the key. In fact, the number of levels of the
trie that need to be traversed can be considerably less than the length of the key,
especially when there are few keys stored. However, if a string is in the trie, you
will have to look at all its characters, so isIn has a worst-case time of Θ(x.length).

To insert a key X in a trie, we again find the longest prefix of X in the trie,
which corresponds to some node P. Then, if P is a leaf node, we insert enough
internal nodes to distinguish X from P.label(). Otherwise, we can insert a leaf
for X in the appropriate child of P. Figure 9.7 illustrates the results of adding "bat"
and "faceplate" to the trie in Figure 9.6. Adding "bat" simply requires adding a
leaf to an existing node. Adding "faceplate" requires inserting two new nodes first.
The method insert below performs the trie insertion.

/** The result of inserting X into this Trie, if it is not
 * already there, and returning this. This trie is
 * unchanged if X is in it already. */
public Trie insert(String X)
{
    return insert(X, 0);
}

/** Assumes this is a level L node in some Trie. Returns the
 * result of inserting X into this Trie. Has no effect (returns
 * this) if X is already in this Trie. */
private Trie insert(String X, int L)
{
    if (isEmpty())
        return new LeafTrie(X);
    int c = nth(X, L);
    if (isLeaf()) {
        if (X.equals(label()))
            return this;
        else if (c == nth(label(), L))
            return new InnerTrie(c, insert(X, L+1));
        else {
            Trie newNode = new InnerTrie(c, new LeafTrie(X));
            newNode.child(nth(label(), L), this);
            return newNode;
        }
    } else {
        child(c, child(c).insert(X, L+1));
        return this;
    }
}
Here, the constructor for InnerTrie(c, T), described later, gives us a Trie for which
child(c) is T and all other children are empty.
Deleting from a trie just reverses this process. Whenever a trie node is reduced
to containing a single leaf, it may be replaced by that leaf. The following program
indicates the process.
public Trie remove(String x)
{
    return remove(x, 0);
}

/** Remove x from this Trie, which is assumed to be level L, and
 * return the result. */
private Trie remove(String x, int L)
{
    if (isEmpty())
        return this;
    if (isLeaf()) {
        if (x.equals(label()))
            return EMPTY;
        else
            return this;
    }
    int c = nth(x, L);
    child(c, child(c).remove(x, L+1));
    int d = onlyMember();
    if (d >= 0)
        return child(d);
    return this;
}

/** If this Trie contains a single string, which is in
 * child(K), return K. Otherwise returns -1. */
private int onlyMember() { /* Left to the reader. */ }
9.2.2 Tries: Representation
We are left with the question of how to represent these tries. The main problem
of course is that the nodes contain a variable number of children. If the number of
children in each node is small, a linked tree representation like those described in
§5.2 will work. However, for fast access, it is traditional to use an array to hold the
children of a node, indexed by the characters that label the edges.

This leads to something like the following:
class EmptyTrie extends Trie {
public boolean isEmpty() { return true; }
public boolean isLeaf() { return false; }
public String label() { throw new Error(...); }
public Trie child(int c) { throw new Error(...); }
protected void child(int c, Trie T) { throw new Error(...); }
}
class LeafTrie extends Trie {
private String L;
/** A Trie containing just the string S. */
LeafTrie(String s) { L = s; }
public boolean isEmpty() { return false; }
public boolean isLeaf() { return true; }
public String label() { return L; }
public Trie child(int c) { return EMPTY; }
protected void child(int c, Trie T) { throw new Error(...); }
}
class InnerTrie extends Trie {
    // ALPHABETSIZE has to be defined somewhere.
    private Trie[] kids = new Trie[ALPHABETSIZE];

    /** A Trie with child(K) == T and all other children empty. */
    InnerTrie(int k, Trie T) {
        for (int i = 0; i < kids.length; i += 1)
            kids[i] = EMPTY;
        child(k, T);
    }

    public boolean isEmpty() { return false; }
    public boolean isLeaf() { return false; }
    public String label() { throw new Error(...); }
    public Trie child(int c) { return kids[c]; }
    protected void child(int c, Trie T) { kids[c] = T; }
}
9.2.3 Table compression
Actually, our alphabet is likely to have "holes" in it: stretches of encodings that
don't correspond to any character that will appear in the Strings we insert. We
could cut down on the size of the inner nodes (the kids arrays) by performing a
preliminary mapping of chars into a compressed encoding. For example, if the
only characters in our strings are the digits 0-9, then we could re-do InnerTrie as
follows:

class InnerTrie extends Trie {
    private static char[] charMap = new char['9'+1];

    static {
        charMap[0] = 0;
        charMap['0'] = 1; charMap['1'] = 2; ...
    }

    public Trie child(int c) { return kids[charMap[c]]; }
    protected void child(int c, Trie T) { kids[charMap[c]] = T; }
}
This helps, but even so, arrays that may be indexed by all characters valid in a
key are likely to be relatively large (for a tree node): say, on the order of M = 60
bytes even for nodes that can contain only digits (assuming 4 bytes per pointer, 4
bytes overhead for every object, 4 bytes for a length field in the array). If there is a
total of N characters in all keys, then the space needed is bounded by about NM/2.
The bound is reached only in a highly pathological case (where the trie contains only
two very long strings that are identical except in their last characters). Nevertheless,
the arrays that arise in tries can be quite sparse.

One approach to solving this is to compress the tables. This is especially applica-
ble when there are few insertions once some initial set of strings is accommodated.
By the way, the techniques described below are generally applicable to any such
sparse array, not just tries.

The basic idea is that sparse arrays (i.e., those that mostly contain empty or
null entries) can be overlaid on top of each other by making sure that the non-null
entries in one fall on top of null entries in the others. We allocate all the arrays in
a single large one, and store extra information with each entry so that we can tell
which of the overlaid arrays that entry belongs to. Figure 9.8 shows an appropriate
alternative data structure.

The idea is that we store everybody's array of kids in one place, and store
an edge label that tells us what character is supposed to correspond to each kid.
That allows us to distinguish between a slot that contains somebody else's child
(which means that I have no child for that character), and a slot that contains one
of my children. We arrange that the me field for every node is unique by making
sure that the 0th child (corresponding to □) is always full.

As an example, Figure 9.9 shows the ten internal nodes of the trie in Figure 9.7
overlaid on top of each other. As the figure illustrates, this representation can be
very compact. The number of extra empty entries that are needed on the right (to
prevent indexing off the end of the array) is limited to M − 1, so that it becomes
negligible when the array is large enough. (Aside: When dealing with a set of arrays
that one wishes to compress in this way, it is best to allocate the fullest (least sparse)
first.)

Such close packing comes at a price: insertions are expensive. When one adds a
new child to an existing node, the necessary slot may already be used by some other
array, making it necessary to move the node to a new location by (in effect) first
erasing its non-null entries from the packed storage area, finding another spot for
it and moving its entries there, and finally updating the pointer to the node being
moved in its parent. There are ways to mitigate this, but we won't go into them
here.
9.3 Restoring Balance by Rotation
Another approach is to find an operation that changes the balance of a BST
(choosing a new root that moves keys from a deep side to a shallow side) while
preserving the binary search tree property. The simplest such operations are the
rotations of a tree. Figure 9.10 shows two BSTs holding identical sets of keys.
Consider the rightRotation first (the left is a mirror image). First, the rotation
preserves the binary search tree property. In the unrotated tree, the nodes in A are
abstract class Trie {
    ...
    static protected Trie[] allKids;
    static protected char[] edgeLabels;
    static final char NOEDGE = ...; /* Some char that isn't used. */

    static {
        allKids = new Trie[INITIAL_SPACE];
        edgeLabels = new char[INITIAL_SPACE];
        for (int i = 0; i < INITIAL_SPACE; i += 1) {
            allKids[i] = EMPTY; edgeLabels[i] = NOEDGE;
        }
    }
    ...
}

class InnerTrie extends Trie {
    /* Position of my child 0 in allKids. My kth child, if
     * non-empty, is at allKids[me + k]. If my kth child is
     * not empty, then edgeLabels[me+k] == k. edgeLabels[me]
     * is always 0 (□). */
    private int me;

    /** A Trie with child(K) == T and all other children empty. */
    InnerTrie(int k, Trie T) {
        Set me so that allKids[me] and allKids[me + k] are both unused;
        child(0, EMPTY);
        child(k, T);
    }

    public Trie child(int c) {
        if (edgeLabels[me + c] == c)
            return allKids[me + c];
        else
            return EMPTY;
    }

    protected void child(int c, Trie T) {
        if (edgeLabels[me + c] != NOEDGE &&
            edgeLabels[me + c] != c) {
            // Move my kids to a new location, and point me at it.
        }
        allKids[me + c] = T;
        edgeLabels[me + c] = c;
    }
}

Figure 9.8: Data structures used with compressed Tries.
[Figure 9.9 diagram: the ten trie nodes packed into a single pair of arrays, with edge labels in the upper row and child pointers, to leaves such as bat✷, axe✷, abase✷, abash✷, faceplate✷, axolotl✷, facet✷, abate✷, fabric✷, and abbas✷, in the lower row.]
Figure 9.9: A packed version of the trie from Figure 9.7. Each of the trie nodes from
that figure is represented as an array of children indexed by character; the character
that is the index of a child is stored in the upper row (which corresponds to the array
edgeLabels). The pointer to the child itself is in the lower row (which corresponds
to the allKids array). Empty boxes on top indicate unused locations (the NOEDGE
value). To compress the diagram, I've changed the character set encoding so that ✷
is 0, a is 1, b is 2, etc. The crossed boxes in the lower row indicate empty nodes.
There must also be an additional 24 empty entries on the right (not shown) to
account for the c–z entries of the rightmost trie node stored. The search algorithm
uses edgeLabels to determine when an entry actually belongs to the node it is
currently examining. For example, the root node is supposed to contain entries for
a, b, and f. And indeed, if you count 1, 2, and 6 over from the root box
above, you'll find entries whose edge labels are a, b, and f. If, on the other
hand, you count over 3 from the root box, looking for the non-existent c edge, you
find instead an edge label of e, telling you that the root node has no c edge.
exactly the ones less than B, as they are on the right; D is greater, as on the
right; and subtree C is greater, as on the right. You can also assure yourself that
the nodes under D in the rotated tree bear the proper relation to it.
Turning to height, let's use the notation H_A, H_C, H_E, H_B, and H_D to denote
the heights of subtrees A, C, and E and of the subtrees whose roots are nodes B
and D. Any of A, C, or E can be empty; we'll take their heights in that case to
be −1. The height of the tree on the left is 1 + max(H_E, 1 + H_A, 1 + H_C). The
height of the tree on the right is 1 + max(H_A, 1 + H_C, 1 + H_E). Therefore, as long
as H_A > max(H_C + 1, H_E) (as would happen in a left-leaning tree, for example),
the height of the right-hand tree will be less than that of the left-hand tree. One
gets a similar situation in the other direction.
In fact, it is possible to convert any BST into any other that contains the same
keys by means of rotations. This amounts to showing that by rotation, we can move
any node of a BST to the root of the tree while preserving the binary search tree
[Figure 9.10 diagram: the left tree is rooted at D, with left child B (over subtrees A and C) and right subtree E; the right tree is rooted at B, with left subtree A and right child D (over subtrees C and E). D.rotateRight() turns the left tree into the right one; B.rotateLeft() does the reverse.]
Figure 9.10: Rotations in a binary search tree. Triangles represent subtrees and
circles represent individual nodes. The binary search tree relation is maintained by
both operations, but the levels of various nodes are affected.
property [why is this sufficient?]. The argument is an induction on the structure of
trees.

It is clearly possible for empty or one-element trees.

Suppose we want to show it for a larger tree, assuming (inductively) that all
smaller trees can be rotated to bring any of their nodes to their root. We proceed
as follows:

• If the node we want to make the root is already there, we're done.

• If the node we want to make the root is in the left child, rotate the left
child to make it the root of the left child (inductive hypothesis). Then
perform a right rotation on the whole tree.

• Similarly if the node we want is in the right child.
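In code, a rotation is just a couple of pointer reassignments. Here is a sketch on a minimal BST node type (the BSTNode class is assumed for illustration; compare the letters with Figure 9.10):

    class BSTNode {
        int key;
        BSTNode left, right;
    }

    /** Right rotation: D's left child B becomes the root of the subtree;
     *  B's old right subtree C becomes D's new left subtree.  Returns
     *  the new subtree root (rotateLeft is the mirror image). */
    static BSTNode rotateRight(BSTNode d) {
        BSTNode b = d.left;
        d.left = b.right;   // subtree C: keys between B and D
        b.right = d;
        return b;
    }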
9.3.1 AVL Trees

Of course, knowing that it is possible to re-arrange a BST by means of rotation
doesn't tell us which rotations to perform. The AVL tree is an example of a technique
for keeping track of the heights of subtrees and performing rotations when
they get too far out of line. An AVL tree⁵ is simply a BST that satisfies the

AVL Property: the heights of the left and right subtrees of every node
differ by at most one.

⁵The name is taken from the names of the two discoverers, Adelson-Velskii and Landis.

Adding or deleting a node at the bottom of such a tree (as happens with the simple
BST insertion and deletion algorithms from §6.1) may invalidate the AVL property,
but it may be restored by working up toward the root from the point of the insertion
or deletion and performing certain selected rotations, depending on the nature of
the imbalance that needs to be corrected. In the following diagrams, the expressions
in the subtrees indicate their heights. An unbalanced subtree having the form
[Diagram: a node whose left subtree has height h and whose right child has subtrees of heights h and h + 1]

can be rebalanced with a single left rotation, giving an AVL tree of the form:

[Diagram: a node whose left child has two subtrees of height h, and whose right subtree has height h + 1]
Finally, consider an unbalanced tree of the form

[Diagram: node A with left subtree of height h and right child C; C's right subtree has height h, and C's left child B has subtrees of heights h′ and h′′]

where at least one of h′ and h′′ is h and the other is either h or h − 1. Here, we can
rebalance by performing two rotations, first a right rotation on C, and then a left
rotation on A, giving the correct AVL tree

[Diagram: node B at the root, with left child A (subtrees of heights h and h′) and right child C (subtrees of heights h′′ and h)]
The other possible cases of imbalance that result from adding or removing a single
node are mirror images of these.

Thus, if we keep track of the heights of all subtrees, we can always restore the
AVL property, starting from the point of insertion or deletion at the base of the
tree and proceeding upwards. In fact, it turns out that it isn't necessary to know
the precise heights of all subtrees, but merely to keep track of the three cases at
each node: the two subtrees have the same height, the height of the left subtree is
greater by 1, and the height of the right subtree is greater by 1.
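The rebalancing step itself is easy to express in code. The following is a sketch only: it assumes a node type with a cached height field (rather than the three-case summary just described), and the rotation helpers update those heights.

    class AVLNode {
        int key, height;                 // the empty tree has height -1
        AVLNode left, right;
    }

    static int height(AVLNode t) { return t == null ? -1 : t.height; }

    static void setHeight(AVLNode t) {
        t.height = 1 + Math.max(height(t.left), height(t.right));
    }

    static AVLNode rotateRight(AVLNode d) {
        AVLNode b = d.left;
        d.left = b.right; b.right = d;
        setHeight(d); setHeight(b);
        return b;
    }

    static AVLNode rotateLeft(AVLNode b) {   // mirror image
        AVLNode d = b.right;
        b.right = d.left; d.left = b;
        setHeight(b); setHeight(d);
        return d;
    }

    /** Restore the AVL property at T, assuming both subtrees of T are
     *  AVL and their heights differ by at most 2.  Returns the new
     *  root; apply at each node on the way back up to the root. */
    static AVLNode rebalance(AVLNode t) {
        if (height(t.left) - height(t.right) > 1) {
            if (height(t.left.left) < height(t.left.right))
                t.left = rotateLeft(t.left);     // the two-rotation case
            t = rotateRight(t);
        } else if (height(t.right) - height(t.left) > 1) {
            if (height(t.right.right) < height(t.right.left))
                t.right = rotateRight(t.right);  // mirror two-rotation case
            t = rotateLeft(t);
        } else
            setHeight(t);
        return t;
    }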
9.4 Skip Lists
The B-tree was an example of a search tree in which nodes had variable numbers
of children, with each child representing some ordered set of keys. It speeds up
searches as does a vanilla binary search tree by subdividing the keys at each node
into disjoint ranges of keys, and contrives to keep these sequences of comparable
length, balancing the tree. Here we look at another structure that does much the
same thing, except that it uses chance rather than rotation to approximately balance
the tree, and it merely achieves this balance with high probability, rather than with
certainty.
Consider the same set of integer keys from Figure 9.1, arranged into a search tree
where each node has one key and any number of children, and the children of any
node all have keys that are at least as large as that of their parent. Figure 9.11
shows a possible arrangement. The maximum heights at which the keys appear are
chosen independently according to a rule that gives a probability of (1 − p)p^k of a
key appearing at height k (0 being the bottom). That is, 0 < p < 1 is an
arbitrary constant that represents the approximate proportion of all nodes at height
≥ e that have height > e. We add a minimal (−∞) key at the left with sufficient
height to serve as a root for the whole tree.
Figure 9.11 shows an example, created using p = 0.5. To look for a key, we
can scan this tree from left to right starting at any level and working downwards.
Starting at the bottom (level 0) just gives us a simple linear search. At higher levels,
we search a forest of trees, choosing which forest to examine more closely on the
basis of the value of its root node. To see if 127 is a member, for example, we can
look at

• the first 15 entries of level 0 (not including −∞) [15 entries]; or

• the first 7 level-1 entries, and then the 2 level-0 items below the key 120 [9
entries]; or

• the first 3 level-2 entries, then the level-1 entry 140, and then the 2 level-0
items below 120 [6 entries]; or

• the level-3 entry 90, then the level-2 entry 120, then the level-1 entry 140, and
then the 2 level-0 items below 120 [5 entries].
We can represent this tree as a kind of linear list of nodes in in-order (see
Figure 9.12) in which the nodes have random numbers of next links, and the i-th
next link in each (numbering from 0 as usual) is connected to the next node that
has at least i + 1 links. This list-like representation, with some links skipping
arbitrary numbers of list elements, explains the name given to this data structure:
the skip list⁶.

⁶William Pugh, Skip lists: A probabilistic alternative to balanced trees, Comm. of the ACM,
33, 6 (June, 1990), pp. 668–676.
[Figure 9.11 diagram: a five-level (0–4) structure rooted at −∞; each key appears at level 0, with 25, 55, 95, and 140 reaching level 1, 20 and 120 reaching level 2, and 90 reaching level 3.]
Figure 9.11: An abstract view of a skip list, showing its relationship to a (non-binary)
search tree. Each key other than −∞ is duplicated to a random height.
We can search this structure beginning at any level. In the best case, to search
(unsuccessfully) for the target value 127, we need only look at the keys in the
shaded nodes. Darker shaded nodes indicate keys larger than 127 that bound the
search.
[Figure 9.12 diagram: the nodes −∞, 10, 20, 25, 30, 40, 50, 55, 60, 90, 95, 100, 115, 120, 125, 130, 140, and 150 in order, each with between one and four next pointers (levels 0–3).]
Figure 9.12: The skip list from Figure 9.11, showing a possible representation. The
data structure is an ordered list whose nodes contain random numbers of pointers to
later nodes (which allow intervening items in the list to be skipped during a search;
hence the name). If a node has at least k pointers, then it contains a pointer to the
next node that has at least k pointers. A node for ∞ at the right allows us to avoid
tests for null. Again, the nodes looked at during a search for 127 are shaded; the
darker shading indicates nodes that limit the search.
[Figure 9.13 diagram: the modified skip list, with 20 removed and 126 and 127 inserted between 125 and 130.]
Figure 9.13: The skip list from Figure 9.12 after inserting 127 and 126 (in either
order), and deleting 20. Here, the 127 node is randomly given a height of 5, and
the 126 node a height of 1. The shaded nodes show which previously existing nodes
need to change. For the two insertions, the nodes needing change are the same as
the light-shaded nodes that were examined to search for 127 (or 126), plus the
nodes at the ends (if they need to be heightened).
Searching is very simple. If we denote the value at one of these nodes as L.value
(here, we'll use integer keys) and the next pointer at height k as L.next[k], then:
/** True iff X is in the skip list beginning at node L at
* a height <= K, where K>=0. */
static boolean contains (SkipListNode L, int k, int x) {
if (x == L.next[k].value)
return true;
else if (x > L.next[k].value)
return contains (L.next[k], k, x);
else if (k > 0)
return contains (L, k-1, x);
else
return false;
}
We can start at any level k ≥ 0 up to the height of the tree. It turns out that
a reasonable place to start for a list containing N nodes is at level log_{1/p} N, as
explained below.
To insert or delete into the list, we find the position of the node to be added
or deleted by the process above, keeping track of the nodes we traverse to do so.
When the item is added or deleted, these are the nodes whose pointers may need
to be updated. When we insert nodes, we choose a height for them randomly in
such a way that the number of nodes at height k + 1 is roughly p times the number
at height k, where p is some probability (typical values for which might be 0.5 or
0.25). That is, if we are shooting for a roughly n-ary search tree, we let p = 1/n. A
suitable procedure might look like this:
/** A random integer, h, in the range 0 .. MAX such that
* Pr(h >= k) = P^k, for 0 <= k <= MAX. */
static int randomHeight (double p, int max, Random r) {
int h;
h = 0;
while (h < max && r.nextDouble () < p)
h += 1;
return h;
}
In general, it is pointless to accommodate arbitrarily large heights, so we impose
some maximum, generally the logarithm (base 1/p) of the maximum number of keys
one expects to need.
Intuitively, any sequence of M inserted nodes each of whose heights is at least
k will be randomly broken about every 1/p nodes by one whose height is strictly
greater than k. Likewise for nodes of height at least k + 1, and so forth. So, if
our list contains N items, and we start looking at level log_{1/p} N, we'd expect to
look at roughly (1/p) log_{1/p} N keys (that is, 1/p keys at each of log_{1/p} N
levels). In other words, Θ(lg N) on average, which is what we want. Admittedly,
this analysis is a bit handwavy, but the true bound is not significantly larger. Since
inserting and deleting consists of finding the node, plus some insertion or deletion
time proportional to the node's height, we actually have Θ(lg N) expected bounds
on search, insertion, and deletion.
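To make the update step concrete, here is a sketch of insertion. It is an illustration only: it uses null tests in place of the ∞ sentinel of Figure 9.12, and the SkipListNode class shown (with the value and next fields assumed by contains) is our own invention.

    class SkipListNode {
        int value;
        SkipListNode[] next;
        SkipListNode(int value, int levels) {
            this.value = value;
            next = new SkipListNode[levels];
        }
    }

    /** Insert X into the skip list headed by SENTINEL (the "-infinity"
     *  node, assumed to have MAX+1 levels), using randomHeight above. */
    static void insert(SkipListNode sentinel, int max, int x,
                       double p, Random r) {
        SkipListNode node =
            new SkipListNode(x, randomHeight(p, max, r) + 1);
        SkipListNode L = sentinel;
        for (int k = sentinel.next.length - 1; k >= 0; k -= 1) {
            while (L.next[k] != null && L.next[k].value < x)
                L = L.next[k];             // last node before x at level k
            if (k < node.next.length) {    // splice node in at this level
                node.next[k] = L.next[k];
                L.next[k] = node;
            }
        }
    }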
Exercises
9.1. Fill in the following to agree with its comments:
/** Return a modified version of T containing the same nodes
* with the same inorder traversal, but with the node containing
* label X at the root. Does not create any new Tree nodes. */
static Tree rotateUp (Tree T, Object X) {
// FILL IN
}
9.2. What is the maximum height of an order-5 B-tree containing N nodes? What
is the minimum height? What sequences of keys give the maximum height (that is,
give a general characterization of such sequences)? What sequences of keys give the
minimum height?

9.3. Write a non-recursive version of the contains function for skip lists (§9.4).

9.4. Define an implementation of the SortedSet interface that uses a skip list
representation.
Chapter 10
Concurrency and Synchronization
An implicit assumption in everything we've done so far is that a single program is
modifying our data structures. In Java, one can have the effect of multiple programs
modifying an object, due to the existence of threads.

Although the language used to describe threads suggests that their purpose is
to allow several things to happen simultaneously, this is a somewhat misleading
impression. Even the smallest Java application running on Sun's JDK platform, for
example, has five threads, and that's only if the application has not created any
itself, and even if the machine on which the program runs consists of a single processor
(which can only execute one instruction at a time). The four additional system
threads perform a number of tasks (such as finalizing objects that are no longer
reachable by the program) that are logically independent of the rest of the program.
Their actions could usefully occur at any time relative to the rest of the program.
Sun's Java runtime system, in other words, is using threads as an organizational tool
for its system. Threads abound in Java programs that use graphical user interfaces
(GUIs). One thread draws or redraws the screen. Another responds to events such
as the clicking of a mouse button at some point on the screen. These are related,
but largely independent activities: objects must be redrawn, for example, whenever
a window becomes invisible and uncovers them, which happens independently of
any calculations the program is doing.

Threads violate our implicit assumption that a single program operates on our
data, so that even an otherwise perfectly implemented data structure, with all of
its instance variables private, can become corrupted in rather bizarre ways. The
existence of multiple threads operating on the same data objects also raises the
general problem of how these threads are to communicate with each other in an
orderly fashion.
10.1 Synchronized Data Structures
Consider the ArrayList implementation from §4.1. In the method ensureCapacity,
we find
public void ensureCapacity (int N) {
if (N <= data.length)
return;
Object[] newData = new Object[N];
System.arraycopy (data, 0,
newData, 0, count);
data = newData;
}
public Object set (int k, Object x) {
check (k, count);
Object old = data[k];
data[k] = x;
return old;
}
Suppose one program executes ensureCapacity while another is executing set on
the same ArrayList object. We could see the following interleaving of their actions:
/* Program 1 executes: */ newData = new Object[N];
/* Program 1 executes: */ System.arraycopy (data, 0,
newData, 0, count);
/* Program 2 executes: */ data[k] = x;
/* Program 1 executes: */ data = newData;
Thus, we lose the value that Program 2 set, because it puts this value into the old
value of data after data's contents have been copied to the new, expanded array.
To solve the simple problem presented by ArrayList, threads can arrange to
access any particular ArrayList in mutual exclusion; that is, in such a way that
only one thread at a time operates on the object. Java's synchronized statement
provides mutual exclusion, allowing us to produce synchronized (or thread-safe) data
structures. Here is part of an example, showing both the use of the synchronized
method modifier and equivalent use of the synchronized statement:
public class SyncArrayList<T> extends ArrayList<T> {
...
public void ensureCapacity (int n) {
synchronized (this) {
super.ensureCapacity (n);
}
}
public synchronized T set (int k, T x) {
return super.set (k, x);
}
}
The process of providing such wrapper functions for all methods of a List is
sufficiently tedious that the standard Java library class java.util.Collections
provides the following method:
/** A synchronized (thread-safe) view of the list L, in which only
* one thread at a time executes any method. To be effective,
* (a) there should be no subsequent direct use of L,
* and (b) the returned List must be synchronized upon
* during any iteration, as in
*
* List aList = Collections.synchronizedList(new ArrayList());
* ...
* synchronized(aList) {
* for (Iterator i = aList.iterator(); i.hasNext(); )
* foo(i.next());
* }
*/
public static <T> List<T> synchronizedList (List<T> L) { ... }
Unfortunately, there is a time cost associated with synchronizing on every operation,
which is why the Java library designers decided that Collection and most
of its subtypes would not be synchronized. On the other hand, StringBuffers and
Vectors are synchronized, and cannot be corrupted by simultaneous use.
10.2 Monitors and Orderly Communication
The objects returned by the synchronizedList method are examples of the simplest
kind of monitor. This term refers to an object (or type of object) that controls
(monitors) concurrent access to some data structure so as to make it work correctly.
One function of a monitor is to provide mutually exclusive access to the
operations of the data structure, where needed. Another is to arrange for synchronization
between threads, so that one thread can wait until an object is ready
to provide it with some service.

Monitors are exemplified by one of the classic examples: the shared buffer or
mailbox. A simple version of its public specification looks like this:
/** A container for a single message (an arbitrary Object). At any
* time, a SmallMailbox is either empty (containing no message) or
* full (containing one message). */
public class SmallMailbox {
/** When THIS is empty, set its current message to MESSAGE, making
* it full. */
public synchronized void deposit (Object message)
throws InterruptedException { ... }
/** When THIS is full, empty it and return its current message. */
public synchronized Object receive ()
throws InterruptedException { ... }
}
Since the specifications suggest that either method might have to wait for a new
message to be deposited or an old one to be received, we specify both as possibly
throwing an InterruptedException, which is the standard Java way to indicate
that while we were waiting, some other thread interrupted us.
The SmallMailbox specification illustrates the features of a typical monitor:

• None of the modifiable state variables (i.e., fields) are exposed.

• Accesses from separate threads that make any reference to modifiable state are
mutually excluded; only one thread at a time holds a lock on a SmallMailbox
object.

• A thread may relinquish a lock temporarily and await notification of some
change. But changes in the ownership of a lock occur only at well-defined
points in the program.
The internal representation is simple:
private Object message;
private boolean amFull;
The implementations make use of the primitive Java features for waiting until
notied:
public synchronized void deposit (Object message)
throws InterruptedException
{
while (amFull)
wait (); // Same as this.wait ();
this.message = message; this.amFull = true;
notifyAll (); // Same as this.notifyAll ()
}
public synchronized Object receive ()
throws InterruptedException
{
while (! amFull)
wait ();
amFull = false;
notifyAll ();
return message;
}
The methods of SmallMailbox allow other threads in only at carefully controlled
points: the calls to wait. For example, the loop in deposit means "If there is still
old unreceived mail, wait until some other thread receives it and wakes me up
again (with notifyAll) and I have managed to lock this mailbox again." From
the point of view of a thread that is executing deposit or receive, each call to
wait has the effect of causing some change to the instance variables of this: some
change, that is, that could be effected by other calls to deposit or receive.
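As a small (hypothetical) demonstration of the mailbox in use, the following program starts one thread that deposits three messages and another that receives and prints them; the class and message names are made up for illustration:

    class MailboxDemo {
        public static void main(String[] args) {
            final SmallMailbox box = new SmallMailbox();
            Thread producer = new Thread() {
                public void run() {
                    try {
                        for (int i = 0; i < 3; i += 1)
                            box.deposit("message " + i); // waits until empty
                    } catch (InterruptedException e) { }
                }
            };
            Thread consumer = new Thread() {
                public void run() {
                    try {
                        for (int i = 0; i < 3; i += 1)
                            System.out.println(box.receive()); // waits until full
                    } catch (InterruptedException e) { }
                }
            };
            producer.start(); consumer.start();
        }
    }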
As long as the threads of a program are careful to protect all their data in
monitors in this fashion, they will avoid the sorts of bizarre interaction described at
the beginning of §10.1. Of course, there is no such thing as a free lunch; the use of
locking can lead to the situation known as deadlock, in which two or more threads
wait for each other indefinitely, as in this artificial example:
class Communicate {
static SmallMailbox
box1 = new SmallMailbox (),
box2 = new SmallMailbox ();
}
// Thread #1: | // Thread #2:
m1 = Communicate.box1.receive ();| m2 = Communicate.box2.receive ();
Communicate.box2.deposit (msg1); | Communicate.box1.deposit (msg2);
Since neither thread sends anything before trying to receive a message from its box,
both threads wait for each other (the problem could be solved by having one of the
two threads reverse the order in which it receives and deposits).
10.3 Message Passing
Monitors provide a disciplined way for multiple threads to access data without
stumbling over each other. Lurking behind the concept of monitor is a simple idea:

Thinking about multiple programs executing simultaneously is hard, so
don't do it! Instead, write a bunch of one-thread programs, and have
them exchange data with each other.

In the case of general monitors, exchanging data means setting variables that each
can see. If we take the idea further, we can instead define exchanging data as
reading input and writing output. We get a concurrent programming discipline
called message passing.

In the message-passing world, threads are independent sequential programs that
send each other messages. They read and write messages using methods that correspond
to read on Java Readers, or print on Java PrintStreams. As a result,
one thread is affected by another only when it bothers to read its messages.

We can get the effect of message passing by writing our threads to perform all
interaction with each other by means of mailboxes. That is, the threads share some
set of mailboxes, but share no other modifiable objects or variables (unmodifiable
objects, like Strings, are fine to share).
Exercises
10.1. Give a possible implementation for the Collections.synchronizedList
static method in §10.1.
Chapter 11
Pseudo-Random Sequences
Random sequences of numbers have a number of uses in simulation, game playing,
cryptography, and efficient algorithm development. The term random is rather
difficult to define. For most of our purposes, we really don't need to answer the
deep philosophical questions, since our needs are generally served by sequences that
display certain statistical properties. This is a good thing, because truly random
sequences in the sense of unpredictable are difficult to obtain quickly, and
programmers generally resort, therefore, to pseudo-random sequences. These are
generated by some formula, and are therefore predictable in principle. Nevertheless,
for many purposes, such sequences are acceptable, if they have the desired
statistics.

We commonly use sequences of integers or floating-point numbers that are uniformly
distributed throughout some interval; that is, if one picks a number (truly)
at random out of the sequence, the probability that it is in any set of numbers from
the interval is proportional to the size of that set. It is relatively easy to arrange
that a sequence of integers in some interval has this particular property: simply
enumerate a permutation of the integers in that interval over and over. Each integer
is enumerated once per repetition, and so the sequence is uniformly distributed.
Of course, having described it like this, it becomes even more apparent that the
sequence is anything but random in the informal sense of this term. Nevertheless,
when the interval of integers is large enough, and the permutation jumbled
enough, it is hard to tell the difference. The rest of this chapter will deal with
generating sequences of this sort.
11.1 Linear congruential generators

Perhaps the most common pseudo-random-number generators use the following
recurrence:

    X_n = (aX_{n-1} + c) mod m,                            (11.1)

where X_n ≥ 0 is the n-th integer in the sequence, and a, m > 0 and c ≥ 0 are
integers. The seed value, X_0, may be any value such that 0 ≤ X_0 < m. When m is
a power of two, the X_n are particularly easy to compute, as in the following Java
class.
/** A generator of pseudo-random numbers in the range 0 .. 2^31 - 1. */
class Random1 {
private int randomState;
static final int
a = ...,
c = ...;
Random1(int seed) { randomState = seed; }
int nextInt() {
randomState = (a * randomState + c) & 0x7fffffff;
return randomState;
}
}
Here, m is 2^31. The & operation computes mod 2^31 [why?]. The result can be any
non-negative integer. If we change the calculation of randomState to
randomState = a * randomState + c;
then the computation is implicitly done modulo 2^32, and the results are integers in
the range -2^31 to 2^31 - 1.
The question to ask now is how to choose a and c appropriately. Considerable
analysis has been devoted to this question¹. Here, I'll just summarize. I will restrict
the discussion to the common case of m = 2^w, where w > 2 is typically the word
size of the machine (as in the Java code above). The following criteria for a and c
are desirable.

¹For details, see D. E. Knuth, Seminumerical Algorithms (The Art of Computer Programming,
volume 2), second edition, Addison-Wesley, 1981.

1. In order to get a sequence that has maximum period (that is, which cycles
through all integers between 0 and m − 1, or in our case −m/2 to m/2 − 1), it
is necessary and sufficient that c and m be relatively prime (have no common
factors other than 1), and that a have the form 4k + 1 for some integer k.

2. A very low value of a is easily seen to be undesirable (the resulting sequence
will show a sort of sawtooth behavior). It is desirable for a to be reasonably
large relative to m (Knuth, for example, suggests a value between 0.01m and
0.99m) and have no obvious pattern to its binary digits.

3. It turns out that values of a that display low potency (defined as the minimal
value of s such that (a − 1)^s is divisible by m) are not good. Since a − 1 must
be divisible by 4 (see item 1 above), the best we can do is to insure that
(a − 1)/4 is not even; that is, a mod 8 = 5.
4. Under the conditions above, c = 1 is a suitable value.
5. Finally, although most arbitrarily-chosen values of a satisfying the above
conditions work reasonably well, it is generally preferable to apply various
statistical tests (see Knuth) just to make sure.
For example, when m = 2^32, some good choices for a are 1566083941 (which Knuth
credits to Waterman) and 1664525 (credited to Lavaux and Janssens).
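For illustration, here is Random1 with its blanks filled in using one of these recommended multipliers and c = 1 (a sketch: the text itself leaves the constants unspecified, and here m = 2^32 is obtained from Java's implicit int overflow, so nextInt returns values over the full int range):

    class Random1a {
        private int randomState;
        static final int
            a = 1664525,   // credited to Lavaux and Janssens (see above)
            c = 1;         // suitable by item 4
        Random1a(int seed) { randomState = seed; }
        int nextInt() {
            randomState = a * randomState + c;  // implicitly mod 2^32
            return randomState;
        }
    }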
There are also bad choices of parameters, of which the most famous is one that
was part of the IBM FORTRAN library for some time: RANDU, which had m = 2^31,
X_0 odd, c = 0, and a = 65539. This does not have maximum period, of course (it
skips all even numbers). Moreover, if you take the numbers three at a time and
consider them as points in space, the set of points is restricted to a relatively few
widely-spaced planes: strikingly bad behavior.
The Java library has a class java.util.Random similar to Random1. It takes
m = 2^48, a = 25214903917, and c = 11 to generate long quantities in the range 0 to
2^48 - 1, which doesn't quite satisfy Knuth's criterion 2. I haven't checked to see how
good it is. There are two ways to initialize a Random: either with a specific seed
value, or with the current value of the system timer (which on UNIX systems gives
a number of milliseconds since some time in 1970), a fairly common way to get an
unpredictable starting value. It's important to have both: for games or encryption,
unpredictability is useful. The first constructor, however, is also important because
it makes it possible to reproduce results.
11.2 Additive Generators

One can get very long periods, and avoid multiplications (which can be a little
expensive for Java long quantities), by computing each successive output, X_n, as a
sum of selected previous outputs: X_{n-k} for several fixed values of k. Here's an
instance of this scheme that apparently works quite well²:

    X_n = (X_{n-24} + X_{n-55}) mod m,  for n ≥ 55        (11.2)

²Knuth credits this to unpublished work of G. J. Mitchell and D. P. Moore in 1958.
where m = 2^e for some e. We initially choose some random seed values for X_0 to
X_54. This has a large period of 2^f (2^55 - 1) for some 0 ≤ f < e. That is, although
numbers it produces must repeat before then (since there are only 2^e of them, and
e is typically something like 32), they won't repeat in the same pattern.
Implementing this scheme gives us another nice opportunity to illustrate the
circular buffer (see §4.5). Keep your eye on the array state in the following:
class Random2 {
/** state[k] will hold X_k, X_(k+55), X_(k+110), ... */
private int[] state = new int[55];
/** nm will hold n mod 55 after each call to nextInt.
* Initially n = 55. */
private int nm;
public Random2(...) {
initialize state[0..54] to values for X_0 to X_54;
nm = -1;
}
public int nextInt() {
nm = mod55(nm + 1);
int k24 = mod55(nm - 24);
// Now state[nm] is X_(n-55) and state[k24] is X_(n-24).
return state[nm] += state[k24];
// Now state[nm] (just returned) represents X_n.
}
private int mod55 (int x) {
return (x >= 55) ? x - 55 : (x < 0) ? x + 55 : x;
}
}
Other values than 24 and 55 will also produce pseudo-random streams with good
characteristics. See Knuth.
11.3 Other distributions
11.3.1 Changing the range
The linear congruential generators above give us pseudo-random numbers in some
fixed range. Typically, we are really interested in some other, smaller, range of
numbers instead. Let's first consider the case where we want a sequence, Y_i, of
integers uniformly distributed in a range 0 to m′ − 1, and are given pseudo-random
integers, X_i, in the range 0 to m − 1, with m > m′. A possible transformation is

    Y_i = ⌊(m′/m) X_i⌋,

which results in numbers that are reasonably evenly distributed as long as m ≫ m′.
From this, it is easy to get a sequence of pseudo-random integers evenly distributed
in the range L ≤ Y′_i < U:

    Y′_i = L + ⌊((U − L)/m) X_i⌋.
It might seem that

    Y_i = X_i mod m′                                     (11.3)

is a more obvious formula for Y_i. However, it has problems when m′ is a small
power of two and we are using a linear congruential generator as in Equation 11.1,
with m a power of 2. For such a generator, the last k bits of X_i have a period of 2^k
[why?], and thus so will Y_i. Equation 11.3 works much better if m′ is not a power
of 2.
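Rendered in Java, the scaling transformation might look like this (a sketch; it assumes X comes from a generator like Random1, so that m = 2^31):

    /** An int uniform on L .. U-1, computed from X, which is assumed
     *  uniform on 0 .. 2^31 - 1.  Implements Y' = L + floor((U-L) X / m). */
    static int scale(int x, int L, int U) {
        final double m = 2147483648.0;      // m = 2^31
        return L + (int) ((U - L) * (x / m));
    }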
The nextInt method in the class java.util.Random produces its 32-bit result
from a 48-bit state by dividing by 2^16 (shifting right by 16 binary places), which gets
converted to an int in the range -2^31 to 2^31 - 1. The nextLong method produces
a 64-bit result by calling nextInt twice:
(nextInt() << 32L) + nextInt();
11.3.2 Non-uniform distributions

So far, we have discussed only uniform distributions. Sometimes that isn't what we
want. In general, assume that we want to pick a number Y in some range u_l to u_h
so that³

    Pr[Y ≤ y] = P(y),

where P is the desired distribution function; that is, it is a non-decreasing function
with P(y) = 0 for y < u_l and P(y) = 1 for y ≥ u_h. The idea of what we must
do is illustrated in Figure 11.1, which shows a graph of a distribution P. The key
observation is that the desired probability of Y being no greater than y_0, P(y_0),
is the same as the probability that a uniformly distributed random number X on
the interval 0 to 1 is less than P(y_0). Suppose, therefore, that we had an inverse
function P^(-1) so that P(P^(-1)(x)) = x. Then,

    Pr[P^(-1)(X) ≤ y] = Pr[X ≤ P(y)] = P(y).

In other words, we can define

    Y = P^(-1)(X)

as the desired random variable.

³The notation Pr[E] means the probability that situation E (called an event) is true.
All of this is straightforward when P is strictly increasing. However, we have
to exercise care when P is not invertible, which happens when P does not strictly
increase (i.e., it has plateaus where its value does not change). If P(y) has a
constant value between y_0 and y_1, this means that the probability that Y falls
between these two values is 0. Therefore, we can uniquely define P^(-1)(x) as the
smallest y such that P(y) ≥ x.
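As a concrete instance (our example, not the text's): the exponential distribution, P(y) = 1 − e^(−y) for y ≥ 0, is strictly increasing, and solving P(Y) = X gives Y = −ln(1 − X). In Java:

    /** A pseudo-random value with distribution P(y) = 1 - e^(-y)
     *  (exponential, mean 1), computed as P^(-1)(X) = -ln(1 - X). */
    static double nextExponential(java.util.Random r) {
        return -Math.log(1.0 - r.nextDouble()); // nextDouble(): uniform on [0,1)
    }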
Unfortunately, inverting a continuous distribution (that is, in which Y ranges,
at least ideally, over some interval of real numbers) is not always easy to do. There
are various tricks; as usual, the interested reader is referred to Knuth for details. In
particular, Java uses one of his algorithms (the polar method of Box, Muller, and
Marsaglia) to implement the nextGaussian method in java.util.Random, which
returns normally distributed values (i.e., the bell curve density) with a mean value
of 0 and standard deviation of 1.

[Figure 11.1 diagram: a graph of a distribution function P(y) rising from 0 to 1, with the point (y_0, P(y_0)) marked.]

Figure 11.1: A typical non-uniform distribution, illustrating how to convert a uniformly
distributed random variable into one governed by an arbitrary distribution,
P(y). The probability that y is less than y_0 is the same as the probability that a
uniformly distributed random variable on the interval 0 to 1 is less than or equal to
P(y_0).
11.3.3 Finite distributions
There is a simpler common case: that in which Y is to range over a finite set,
say the integers from 0 to u, inclusive. We are typically given the probabilities
p_i = Pr[Y = i]. In the interesting case, the distribution is not uniform, and hence
the p_i are not necessarily all 1/(u + 1). The relationship between these p_i and P(i)
is

    P(i) = Pr[Y ≤ i] = Σ_{0 ≤ k ≤ i} p_k.
The obvious technique for computing the inverse P^(-1) is to perform a lookup
on a table representing the distribution P. To compute a random i satisfying the
desired conditions, we choose a random X in the range 0 to 1, and return the first i
such that X ≤ P(i). This works because we return i iff P(i − 1) < X ≤ P(i) (taking
P(−1) = 0). The distance between P(i − 1) and P(i) is p_i, and since X is uniformly
distributed across 0 to 1, the probability of getting a point in this interval is equal
to the size of the interval, p_i.
For example, if 1/12 of the time we want to return 0, 1/2 the time we want to
return 1, 1/3 of the time we want to return 2, and 1/12 of the time we want to
return 3, we return the index of the first element of table PT that equals or exceeds
a random X chosen uniformly on the interval 0 to 1, where PT is defined to have
PT[0] = 1/12, PT[1] = 7/12, PT[2] = 11/12, and PT[3] = 1.
Oddly enough, there is a faster way of doing this computation for large u, discovered
by A. J. Walker⁴. Imagine the numbers between 0 and u as labels on u + 1
beakers, each of which can contain 1/(u + 1) units of liquid. Imagine further that
we have u + 1 vials of colored liquids, also numbered 0 to u, each of a different color
and all immiscible in each other; we'll use the integer i as the name of the color in
vial number i. The total amount of liquid in all the vials is 1 unit, but the vials may
contain different amounts. These amounts correspond to the desired probabilities
of picking the numbers 0 through u.

⁴Knuth's citations are Electronics Letters 10, 8 (1974), 127–128, and ACM Transactions on
Mathematical Software, 3 (1976), 253–256.
Suppose that we can distribute the liquid from the vials to the beakers so that

• Beaker number i contains two colors of liquid (the quantity of one of the colors,
however, may be 0), and

• One of the colors of liquid in beaker i is color number i.

Then we can pick a number from 0 to u with the desired probabilities by the
following procedure.

• Pick a random floating-point number, X, uniformly in the range 0 ≤ X < u + 1.
Let K be the integer part of this number and F the fractional part, so that
K + F = X, F < 1, and K, F ≥ 0.

• If the amount of liquid of color K in beaker K is greater than or equal to F,
then return K. Otherwise return the number of the other color in beaker K.
A little thought should convince you that the probability of picking color i under
this scheme is proportional to the amount of liquid of color i. The number K
represents a randomly-chosen beaker, and F represents a randomly-chosen point
along the side of that beaker. We choose the color we find at this randomly chosen
point. We can represent this selection process with two tables indexed by K: Y_K
is the color of the other liquid in beaker K (i.e., besides color K itself), and H_K is
the height of the liquid with color K in beaker K (as a fraction of the distance to
the top gradation of the beaker).
For example, consider the probabilities given previously; an appropriate distribution
of liquid is illustrated in Figure 11.2. The tables corresponding to this figure are
Y = [1, 2, –, 1] (Y_2 doesn't matter in this case), and H = [0.3333, 0.6667, 1.0, 0.3333].
The only remaining problem is to perform the distribution of liquids to beakers,
for which the following procedure suffices (in outline):
[Figure 11.2 diagram: four beakers numbered 0–3, with a legend for the four liquid colors.]
Figure 11.2: An example dividing probabilities (colored liquids) into beakers. Each
beaker holds 1/4 unit of liquid. There is 1/12 unit of 0-colored liquid, 1/2 unit of
1-colored liquid, 1/3 unit of 2-colored liquid, and 1/12 unit of 3-colored liquid.
/** S is a set of integers that are the names of beakers and
* vial colors. Assumes that all the beakers named in S are
* empty and have equal capacity, and the total contents of the vials
* named in S is equal to the total capacity of the beakers in
* S. Fills the beakers in S from the vials in V so that
* each beaker contains liquid from no more than two vials and the
* beaker named s contains liquid of color s. */
void fillBeakers(SetOfIntegers S)
{
if (S is empty)
return;
v0 = the color of a vial in S with the least liquid;
Pour the contents of vial v0 into beaker v0;
/* The contents must fit in the beaker, because since v0
* contains the least fluid, it must have no more than the
* capacity of a single beaker. Vial v0 is now empty. */
v1 = the color of a vial in S with the most liquid;
Fill beaker v0 the rest of the way from vial v1;
/* If |S| = 1, so that v0 = v1, this is the null operation.
* Otherwise, v0 != v1 and vial v1 must contain at
* least as much liquid as each beaker can contain. Thus, beaker
* v0 is filled by this step. (NOTE: |S| is the
* cardinality of S.) */
fillBeakers(S - {v0});
}
The action of pouring the contents of vial v0 into beaker v0 corresponds to setting
H_v0 to the ratio between the amount of liquid in vial v0 and the capacity of beaker
v0. The action of filling beaker v0 the rest of the way from vial v1 corresponds to
setting Y_v0 to v1.
11.4 Random permutations and combinations
Given a set of N values, consider the problem of selecting a random sequence without
replacement of length M from the set. That is, we want a random sequence of M
values from among these N, where each value occurs in the sequence no more than
once. By random sequence we mean that all possible sequences are equally likely⁵.
If we assume that the original values are stored in an array, then the following is a
very simple way of obtaining such a sequence.

⁵Here, I'll assume that the original set contains no duplicate values. If it does, then we have to
treat the duplicates as if they were all different. In particular, if there are k duplicates of a value
in the original set, it may appear up to k times in the selected sequence.
/** Permute A so as to randomly select M of its elements,
* placing them in A[0] .. A[M-1], using R as a source of
* random numbers. */
static void selectRandomSequence(SomeType[] A, int M, Random1 R)
{
int N = A.length;
for (int i = 0; i < M; i += 1)
swap(A, i, R.randInt(i,N-1));
}
Here, we assume swap(V,j,k) exchanges the values of V[j] and V[k].
For example, if DECK[0] is A, DECK[1] is 2, . . . , and DECK[51] is K, then
selectRandomSequence(DECK, 52, new Random());
shuffles the deck of cards.
This technique works, but if M ≪ N, it is not a terribly efficient use of space, at
least when the contents of the array A is something simple, like the integers between
0 and N − 1. For that case, we can better use some algorithms due to Floyd (names
of types and functions are meant to make them self-explanatory).
/** Returns a random sequence of M distinct integers from 0..N-1,
* with all possible sequences equally likely. Assumes 0<=M<=N. */
static SequenceOfIntegers selectRandomIntegers(int N, int M, Random1 R)
{
SequenceOfIntegers S = new SequenceOfIntegers();
for (int i = N-M; i < N; i += 1) {
int s = R.randInt(0, i);
if (s ∈ S)
insert i into S after s;
else
prefix s to the front of S;
}
return S;
}
This procedure produces all possible sequences with equal probability because every
possible sequence of values for s generates a distinct value of S, and all such
sequences are equally probable.

Sanity check: the number of ways to select a sequence of M objects from a set
of N objects is

    N!/(N − M)!

and the number of possible sequences of values for s is equal to the number of possible
values of R.randInt(0,N-M) times the number of possible values of R.randInt(0,N-M+1),
etc., which is

    (N − M + 1)(N − M + 2) ··· N = N!/(N − M)!.
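Here is one concrete rendering of this procedure (a sketch: the text's SequenceOfIntegers becomes a LinkedList<Integer>, and R.randInt(0,i) becomes r.nextInt(i+1)):

    /** A random sequence of M distinct integers from 0..N-1, with all
     *  possible sequences equally likely.  Assumes 0 <= M <= N. */
    static java.util.LinkedList<Integer>
    selectRandomIntegers(int N, int M, java.util.Random r) {
        java.util.LinkedList<Integer> S =
            new java.util.LinkedList<Integer>();
        for (int i = N - M; i < N; i += 1) {
            int s = r.nextInt(i + 1);      // uniform on 0 .. i
            int pos = S.indexOf(s);
            if (pos >= 0)
                S.add(pos + 1, i);         // s is in S: insert i after s
            else
                S.addFirst(s);             // prefix s to the front of S
        }
        return S;
    }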
By replacing the SequenceOfIntegers with a set of integers, and replacing
prefix and insert with simply adding to a set, we get an algorithm for selecting
combinations of M numbers from the first N integers (i.e., where order doesn't
matter).
The Java standard library provides two static methods in the class java.util.Collections
for randomly permuting an arbitrary List:
/** Permute L, using R as a source of randomness. As a result,
* calling shuffle twice with values of R that produce identical
* sequences will give identical permutations. */
public static void shuffle (List<?> L, Random r) { }
/** Same as shuffle (L, D), where D is a default Random value. */
public static void shuffle (List<?> L) { }
This takes linear time if the list supports fast random access.
Chapter 12
Graphs
When the term is used in computer science, a graph is a data structure that represents
a mathematical relation. It consists of a set of vertices (or nodes) and a
set of edges, which are pairs of vertices¹. These edge pairs may be unordered, in
which case we have an undirected graph, or they may be ordered, in which case we
have a directed graph (or digraph) in which each edge leaves, exits, or is out of one
vertex and enters or is into the other. For vertices v and w, we denote a general
edge between v and w as (v, w), or {v, w} if we specifically want to indicate an
undirected edge, or [v, w] if we specifically want to indicate a directed edge that
leaves v and enters w. An edge (v, w) is said to be incident on its two ends, v and
w; if (v, w) is undirected, we say that v and w are adjacent vertices. The degree of
a vertex is the number of edges incident on it. For a directed graph, the in-degree
is the number of edges that enter it and the out-degree is the number that leave.
Usually, the ends of an edge will be distinct; that is, there will be no reflexive edges
from a vertex to itself.

¹Definitions in this section are taken from Tarjan, Data Structures and Network Algorithms,
SIAM, 1983.

A subgraph of a graph G is simply a graph whose vertices and edges are subsets
of the vertices and edges of G.
A path of length k ≥ 0 in a graph from vertex v to vertex v′ is a sequence of
vertices v_0, v_1, . . . , v_{k−1} with v = v_0 and v′ = v_{k−1}, with all the (v_i, v_{i+1}) edges being in
the graph. This definition applies both to directed and undirected graphs; in the
case of directed graphs, the path has a direction. The path is simple if there are
no repetitions of vertices in it. It is a cycle if k > 1 and v = v′, and a simple cycle
if v_0, . . . , v_{k−2} are distinct; in an undirected graph, a cycle must additionally not
follow the same edge twice. A graph with no cycles is called acyclic.

If there is a path from v to v′, then v′ is said to be reachable from v. In an
undirected graph, a connected component is a set of vertices from the graph and all
edges incident on those vertices such that each vertex is reachable from any given
vertex, and no other vertex from the graph is reachable from any vertex in the set.
An undirected graph is connected if it contains exactly one connected component
(containing all vertices in the graph).
[Figure 12.1 diagram: an undirected graph on vertices 0–9.]
Figure 12.1: An undirected graph. The starred edge is incident on vertices 1 and 2.
Vertex 4 has degree 0; 3, 7, 8, and 9 have degree 1; 1, 2 and 6 have degree 2; and 0
and 5 have degree 3. The dashed lines surround the connected components; since
there is more than one, the graph is unconnected. The sequence [2,1,0,3] is a path
from vertex 2 to vertex 3. The path [2,1,0,2] is a cycle. The only path involving
vertex 4 is the 0-length path [4]. The rightmost connected component is acyclic,
and is therefore a free tree.
In a directed graph, the connected components contain the same sets of vertices
that you would get by replacing all directed edges by undirected ones. A subgraph
of a directed graph in which every vertex can be reached from every other is called
a strongly connected component. Figures 12.1 and 12.2 illustrate these definitions.
A free tree is a connected, undirected, acyclic graph (which implies that there
is exactly one simple path from any node to any other). An undirected graph is
biconnected if there are at least two simple paths between any two nodes.
For some applications, we associate information with the edges of a graph. For
example, if vertices represent cities and edges represent roads, we might wish to
associate distances with the edges. Or if vertices represent pumping stations and
edges represent pipelines, we might wish to associate capacities with the edges.
We'll call numeric information of this sort weights.
12.1 A Programmer's Specification
There isn't an obvious single class specification that one might give for programs
dealing with graphs, because variations in what various algorithms need can have a
profound effect on the appropriate representations and what operations those representations
conveniently support. For instructional use, however, Figure 12.3 gives
a sample one-size-fits-all abstraction for general directed graphs, and Figure 12.4
does the same for undirected graphs. The idea is that vertices and edges are identified
by non-negative integers. Any additional data that one wants to associate with
a vertex or edge, such as a more informative label or a weight, can be added on
the side in the form of additional arrays indexed by vertex or edge number.
[Figure 12.2 diagram: a directed graph on vertices 0–8.]
Figure 12.2: A directed graph. The dashed circles show connected components.
Nodes 5, 6 and 7 form a strongly connected component. The other strongly
connected components are the remaining individual nodes. The left component is
acyclic. Nodes 0 and 4 have an in-degree of 0; nodes 1, 2, and 5–8 have an in-degree
of 1; and node 3 has an in-degree of 3. Nodes 3 and 8 have out-degrees of 0; 1, 2,
4, 5, and 7 have out-degrees of 1; and 0 and 6 have out-degrees of 2.
12.2 Representing graphs
Graphs have numerous representations, all tailored to the operations that are critical
to some application.
12.2.1 Adjacency Lists
If the operations succ, pred, leaving, and entering for directed graphs are important
to one's problem (or incident and adjacent for undirected graphs), then it
may be convenient to associate a list of predecessors, successors, or neighbors with
each vertex: an adjacency list. There are many ways to represent such things (as
a linked list, for example). Figure 12.5 shows a method that uses arrays in such a
way as to allow a programmer both to sequence easily through the neighbors of
a directed graph, and to sequence through the set of all edges. I've included only a
couple of indicative operations to show how the data structure works. It is essentially
a set of linked list structures implemented with arrays and integers instead of
objects containing pointers. Figure 12.6 shows an example of a particular directed
graph and the data structures that would represent it.
Another variation on essentially the same structure is to introduce separate
types for vertices and edges. Vertices and Edges would then contain fields such as
/** A general directed graph. For any given concrete extension of this
* class, a different subset of the operations listed will work. For
* uniformity, we take all vertices to be numbered with integers
* between 0 and N-1. */
public interface Digraph {
/** Number of vertices. Vertices are labeled 0 .. numVertices()-1. */
int numVertices();
/** Number of edges. Edges are numbered 0 .. numEdges()-1. */
int numEdges();
/** The vertices that edge E leaves and enters. */
int leaves(int e);
int enters(int e);
/** True iff [v0,v1] is an edge in this graph. */
boolean isEdge(int v0, int v1);
/** The out-degree and in-degree of vertex #V. */
int outDegree(int v);
int inDegree(int v);
/** The number of the Kth edge leaving vertex V, 0<=K<outDegree(V). */
int leaving(int v, int k);
/** The number of the Kth edge entering vertex V, 0<=K<inDegree(V). */
int entering(int v, int k);
/** The Kth successor of vertex V, 0<=K<outDegree(V). It is intended
* that succ(v,k) = enters(leaving(v,k)). */
int succ(int v, int k);
/** The Kth predecessor of vertex V, 0<=K<inDegree(V). It is intended
* that pred(v,k) = leaves(entering(v,k)). */
int pred(int v, int k);
/** Add M initially unconnected vertices to this graph. */
void addVertices(int M);
/** Add an edge from V0 to V1. */
void addEdge(int v0, int v1);
/** Remove all edges incident on vertex V from this graph. */
void removeEdges(int v);
/** Remove edge (v0, v1) from this graph */
void removeEdge(int v0, int v1);
}
Figure 12.3: A sample abstract directed-graph interface in Java.
/** A general undirected graph. For any given concrete extension of
* this class, a different subset of the operations listed will work.
* For uniformity, we take all vertices to be numbered with integers
* between 0 and N-1. */
public interface Graph {
/** Number of vertices. Vertices are labeled 0 .. numVertices()-1. */
int numVertices();
/** Number of edges. Edges are numbered 0 .. numEdges()-1. */
int numEdges();
/** The vertices on which edge E is incident. node0 is the
* smaller-numbered vertex. */
int node0(int e);
int node1(int e);
/** True iff vertices V0 and V1 are adjacent. */
boolean isEdge(int v0, int v1);
/** The number of edges incident on vertex #V. */
int degree(int v);
/** The number of the Kth edge incident on V, 0<=k<degree(V). */
int incident(int v, int k);
/** The Kth node adjacent to V, 0<=K<degree(V). It is
* intended that adjacent(v,k) = either node0(incident(v,k))
* or node1(incident(v,k)). */
int adjacent(int v, int k);
/** Add M initially unconnected vertices to this graph. */
void addVertices(int M);
/** Add an (undirected) edge between V0 and V1. */
void addEdge(int v0, int v1);
/** Remove all edges involving vertex V from this graph. */
void removeEdges(int v);
/** Remove the (undirected) edge (v0, v1) from this graph. */
void removeEdge(int v0, int v1);
}
Figure 12.4: A sample abstract undirected-graph class.
/** A digraph */
public class AdjGraph implements Digraph {
/** A new Digraph with N unconnected vertices */
public AdjGraph(int N) {
numVertices = N; numEdges = 0;
enters = new int[N*N]; leaves = new int[N*N];
nextOutEdge = new int[N*N]; nextInEdge = new int[N*N];
edgeOut0 = new int[N]; edgeIn0 = new int[N];
}
/** The vertices that edge E leaves and enters. */
public int leaves(int e) { return leaves[e]; }
public int enters(int e) { return enters[e]; }
/** Add an edge from V0 to V1. */
public void addEdge(int v0, int v1) {
if (numEdges >= enters.length)
expandEdges(); // Expand all edge-indexed arrays
enters[numEdges] = v1; leaves[numEdges] = v0;
nextInEdge[numEdges] = edgeIn0[v1];
edgeIn0[v1] = numEdges;
nextOutEdge[numEdges] = edgeOut0[v0];
edgeOut0[v0] = numEdges;
numEdges += 1;
}
Figure 12.5: Adjacency-list implementation for a directed graph. Only a few repre-
sentative operations are shown.
/** The number of the Kth edge leaving vertex V, 0<=K<outDegree(V). */
public int leaving(int v, int k) {
int e;
for (e = edgeOut0[v]; k > 0; k -= 1)
e = nextOutEdge[e];
return e;
}
...
/* Private section */
private int numVertices, numEdges;
/* The following are indexed by edge number */
private int[]
enters, leaves,
nextOutEdge, /* The # of sibling outgoing edge, or -1 */
nextInEdge; /* The # of sibling incoming edge, or -1 */
/* edgeOut0[v] is # of first edge leaving v, or -1. */
private int[] edgeOut0;
/* edgeIn0[v] is # of first edge entering v, or -1. */
private int[] edgeIn0;
}
Figure 12.5, continued.
[Figure 12.6 diagram: a directed graph on vertices A–H, the vertex-indexed arrays edgeOut0 and edgeIn0, and the edge-indexed arrays nextOutEdge, nextInEdge, enters, and leaves for its 13 edges (numbered 0–12).]
Figure 12.6: A graph and one form of adjacency list representation. The lists in
this case are arrays. The lower four arrays are indexed by edge number, and the
first two by vertex number. The array nextOutEdge forms linked lists of out-going
edges for each vertex, with roots in edgeOut0. Likewise, nextInEdge and edgeIn0
form linked lists of incoming edges for each vertex. The enters and leaves arrays
give the incident vertices for each edge.
class Vertex {
private int num; /* Number of this vertex */
private Edge edgeOut0, edgeIn0; /* First outgoing & incoming edges. */
...
}
class Edge {
private int num; /* Number of this edge */
private Vertex enters, leaves;
private Edge nextOutEdge, nextInEdge;
}
12.2.2 Edge sets
If all we need to do is enumerate the edges and tell what nodes they are incident
on, we can simplify the representation in §12.2.1 quite a bit by throwing out fields
edgeOut0, edgeIn0, nextOutEdge, and nextInEdge. We will see one algorithm
where this is useful.
[Figure 12.7 diagram, top: a directed graph on vertices A–H.]
M =
A B C D E F G H
A 0 1 0 1 0 0 0 0
B 0 0 0 1 0 1 0 0
C 0 0 0 0 0 0 1 0
D 0 0 1 0 1 0 0 1
E 0 0 0 0 0 0 0 0
F 0 0 0 1 0 0 0 0
G 0 0 0 1 0 0 0 0
H 0 1 1 0 0 0 1 0
[Figure 12.7 diagram, bottom: the undirected version of the same graph.]
M′ =
A B C D E F G H
A 0 1 0 1 0 0 0 0
B 1 0 0 1 0 1 0 1
C 0 0 0 1 0 0 1 1
D 1 1 1 0 1 1 1 1
E 0 0 0 1 0 0 0 0
F 0 1 0 1 0 0 0 0
G 0 0 1 1 0 0 0 1
H 0 1 1 1 0 0 1 0
Figure 12.7: Top: a directed graph and corresponding adjacency matrix. Bottom:
an undirected variant of the graph and adjacency matrix.
12.2.3 Adjacency matrices
If one's graphs are dense (many of the possible edges exist) and if the important
operations include "Is there an edge from v to w?" or "The weight of the edge
between v and w," then we can use an adjacency matrix. We number the vertices 0
to |V| − 1 (where |V| is the size of the set V of vertices), and then set up a |V| × |V|
matrix with entry (i, j) equal to 1 if there is an edge from the vertex numbered i to
the one numbered j and 0 otherwise. For weighted edges, we can let entry (i, j) be
the weight of the edge between i and j, or some special value if there is no edge
(this would be an extension of the specifications of Figure 12.3). When a graph is
undirected, the matrix will be symmetric. Figure 12.7 illustrates two unweighted
graphs, directed and undirected, and their corresponding adjacency matrices.
Adjacency matrices for unweighted graphs have a rather interesting property.
Take, for example, the top matrix in Figure 12.7, and consider the result of multiplying
this matrix by itself. We define the product of any matrix X with itself
as

    (X × X)_ij = Σ_{0 ≤ k < |V|} X_ik X_kj.
For the example in question, we get

M² =
        A  B  C  D  E  F  G  H
    A   0  0  1  1  1  1  0  1
    B   0  0  1  1  1  0  0  1
    C   0  0  0  1  0  0  0  0
    D   0  1  1  0  0  0  2  0
    E   0  0  0  0  0  0  0  0
    F   0  0  1  0  1  0  0  1
    G   0  0  1  0  1  0  0  1
    H   0  0  0  2  0  1  1  0

M³ =
        A  B  C  D  E  F  G  H
    A   0  1  2  1  1  0  2  1
    B   0  1  2  0  1  0  2  1
    C   0  0  1  0  1  0  0  1
    D   0  0  0  3  0  1  1  0
    E   0  0  0  0  0  0  0  0
    F   0  1  1  0  0  0  2  0
    G   0  1  1  0  0  0  2  0
    H   0  0  2  2  2  0  0  2
Translating this, we see that (M · M)ᵢⱼ is equal to the number of vertices, k, such that there is an edge from vertex i to vertex k (Mᵢₖ = 1) and there is also an edge from vertex k to vertex j (Mₖⱼ = 1). For any other vertex, one of Mᵢₖ or Mₖⱼ will be 0. It should be easy to see, therefore, that (M²)ᵢⱼ is the number of paths following exactly two edges from i to j. Likewise, (M³)ᵢⱼ represents the number of paths that are exactly three edges long between i and j. If we use boolean arithmetic instead (where 0 + 1 = 1 + 1 = 1), we instead get 1s in all positions where there is at least one path of length exactly two between two vertices.
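As a small illustration of this computation (a sketch, not code from the text), here is matrix squaring over a 0/1 adjacency matrix, in both counting and boolean forms:

/** Entry (i,j) of the result counts the paths of exactly two edges
 *  from i to j in the graph whose 0/1 adjacency matrix is M. */
static int[][] square(int[][] M) {
    int n = M.length;
    int[][] M2 = new int[n][n];
    for (int i = 0; i < n; i += 1)
        for (int j = 0; j < n; j += 1)
            for (int k = 0; k < n; k += 1)
                M2[i][j] += M[i][k] * M[k][j];
    return M2;
}

/** Boolean variant: entry (i,j) of the result is 1 iff there is at
 *  least one path of exactly two edges from i to j. */
static int[][] booleanSquare(int[][] M) {
    int n = M.length;
    int[][] M2 = new int[n][n];
    for (int i = 0; i < n; i += 1)
        for (int j = 0; j < n; j += 1)
            for (int k = 0; k < n; k += 1)
                M2[i][j] |= M[i][k] & M[k][j];
    return M2;
}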
Adjacency matrices are not good for sparse graphs (those where the number of edges is much smaller than |V|²). It should be obvious also that they present problems when one wants to add and subtract vertices dynamically.
12.3 Graph Algorithms
Many interesting graph algorithms involve some sort of traversal of the vertices or edges of a graph. Exactly as for trees, one can traverse a graph in either depth-first or breadth-first fashion (intuitively, walking away from the starting vertex as quickly or as slowly as possible).
12.3.1 Marking.
However, in graphs, unlike trees, one can get back to a vertex by following edges away from it, making it necessary to keep track of what vertices have already been visited, an operation I'll call marking the vertices. There are several ways to accomplish this.

Mark bits. If vertices are represented by objects, as in the class Vertex illustrated in §12.2.1, we can keep a bit in each vertex that indicates whether the vertex has been visited. These bits must initially all be on (or off) and are then flipped when a vertex is first visited. Similarly, we could do this for edges instead.
Mark counts. A problem with mark bits is that one must be sure they are all set the same way at the beginning of a traversal. If traversals may get cut short, causing mark bits to have arbitrary settings after a traversal, one may be able to use a larger mark instead. Give each traversal a number in increasing sequence (the first traversal is number 1, the second is 2, etc.). To visit a node, set its mark count to the current traversal number. Each new traversal is guaranteed to have a number contained in none of the mark fields (assuming the mark fields are initialized appropriately, say to 0).
Bit vectors. If, as in our abstractions, vertices have numbers, one can keep a bit vector, M, on the side, where M[i] is 1 iff vertex number i has been visited. Bit vectors are easy to reset at the beginning of a traversal (see the sketch after this list).

Ad hoc. Sometimes, the particular traversal being performed provides a way of recognizing a visited vertex. One can't say anything general about this, of course.
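For instance, the bit-vector approach might look like this minimal sketch, using java.util.BitSet (the class and its name here are hypothetical, not part of the text's abstractions):

import java.util.BitSet;

class Marks {
    private final BitSet marked;

    Marks(int numVertices) {
        marked = new BitSet(numVertices);
    }

    /** Reset all marks, as at the start of a traversal. */
    void clearAll()         { marked.clear(); }
    /** Record that vertex V has been visited. */
    void mark(int v)        { marked.set(v); }
    /** True iff vertex V has been visited. */
    boolean isMarked(int v) { return marked.get(v); }
}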
12.3.2 A general traversal schema.
Many graph algorithms have the following general form. Italicized capital-letter
names must be replaced according to the application.
/* GENERAL GRAPH-TRAVERSAL SCHEMA */
COLLECTION OF VERTICES fringe;
fringe = INITIAL COLLECTION;
while (! fringe.isEmpty()) {
    Vertex v = fringe.REMOVE HIGHEST PRIORITY ITEM();
    if (! MARKED(v)) {
        MARK(v);
        VISIT(v);
        For each edge (v,w) {
            if (NEEDS PROCESSING(w))
                Add w to fringe;
        }
    }
}
In the following sections, we look at various algorithms that fit this schema.²

² In this context, a schema (plural schemas or schemata) is a template, containing some pieces that must be replaced. Logical systems, for example, often contain axiom schemata such as

    (∀x P(x)) ⊃ P(y),

where P may be replaced by any logical formula with a distinguished free variable (well, roughly).
12.3.3 Generic depth-first and breadth-first traversal
Depth-first traversal in graphs is essentially the same as in trees, with the exception of the check for "already visited." To implement
/** Perform the operation VISIT on each vertex reachable from V
 * in depth-first order. */
void depthFirstVisit(Vertex v)
we use the general graph-traversal schema with the following replacements.
COLLECTION OF VERTICES is a stack type.
INITIAL COLLECTION is the set {v}.
REMOVE HIGHEST PRIORITY ITEM pops and returns the top.
MARK and MARKED set and check a mark bit (see discussion above).
NEEDS PROCESSING means "not MARKED."
Here, as is often the case, we could dispense with NEEDS PROCESSING (make it always TRUE). The only effect would be to increase the size of the stack somewhat.
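Filled in this way, the schema might become the following sketch. It assumes hypothetical Graph methods numVertices() and successors(v) (yielding the vertices w with an edge (v, w)) and a hypothetical visit procedure; java.util.ArrayDeque serves as the stack, and a BitSet supplies the mark bits.

import java.util.ArrayDeque;
import java.util.BitSet;
import java.util.Deque;

/** Perform visit on each vertex reachable from V in depth-first order. */
static void depthFirstVisit(Graph G, int v) {
    BitSet marked = new BitSet(G.numVertices());
    Deque<Integer> fringe = new ArrayDeque<>();
    fringe.push(v);                       // INITIAL COLLECTION is {v}
    while (!fringe.isEmpty()) {
        int u = fringe.pop();             // REMOVE HIGHEST PRIORITY ITEM
        if (!marked.get(u)) {
            marked.set(u);                // MARK
            visit(u);                     // VISIT
            for (int w : G.successors(u))
                if (!marked.get(w))       // NEEDS PROCESSING
                    fringe.push(w);
        }
    }
}

Replacing the stack operations with FIFO-queue operations (fringe.add and fringe.remove) turns this same skeleton into breadth-first search, described next.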
Breadth-first search is nearly identical. The only differences are as follows.
COLLECTION OF VERTICES is a (FIFO) queue type.
REMOVE HIGHEST PRIORITY ITEM is to remove and return the first (least-recently-added) item in the queue.
12.3.4 Topological sorting.
A topological sort of a directed graph is a listing of its vertices in such an order that if vertex w is reachable from vertex v, then w is listed after v. Thus, if we think of a graph as representing an ordering relation on the vertices, a topological sort is a linear ordering of the vertices that is consistent with that ordering relation. A cyclic directed graph has no topological sort. For example, topological sort is the operation that the UNIX make utility implicitly performs to find an order for executing commands that brings every file up to date before it is needed in a subsequent command.
To perform a topological sort, we associate a count with each vertex of the number of incoming edges from as-yet unprocessed vertices. For the version below, I use an array to keep these counts. The algorithm for topological sort now looks like this.
/** An array of the vertices in G in topologically sorted order.
 * Assumes G is acyclic. */
static int[] topologicalSort(Digraph G)
{
    int[] count = new int[G.numVertices()];
    int[] result = new int[G.numVertices()];
    int k;
    for (int v = 0; v < G.numVertices(); v += 1)
        count[v] = G.inDegree(v);
    Graph-traversal schema replacement for topological sorting;
    return result;
}
The schema replacement for topological sorting is as follows.
COLLECTION OF VERTICES can be any set, multiset, list, or sequence type for vertices (stacks, queues, etc., etc.).
INITIAL COLLECTION is the set of all v with count[v]=0.
REMOVE HIGHEST PRIORITY ITEM can remove any item.
MARKED and MARK can be trivial (i.e., always return FALSE and do nothing, respectively).
VISIT(v) makes v the next non-null element of result and decrements count[w] for each edge (v,w) in G.
NEEDS PROCESSING is true if count[w]==0.
Figure 12.8 illustrates the algorithm.
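With a FIFO queue as the COLLECTION OF VERTICES, the completed procedure might look like the following sketch (the Digraph methods inDegree and successors assumed here are hypothetical, not taken from the text):

import java.util.ArrayDeque;
import java.util.Deque;

/** An array of the vertices in G in topologically sorted order.
 *  Assumes G is acyclic. */
static int[] topologicalSort(Digraph G) {
    int n = G.numVertices();
    int[] count = new int[n];
    int[] result = new int[n];
    int k = 0;
    Deque<Integer> fringe = new ArrayDeque<>();
    for (int v = 0; v < n; v += 1) {
        count[v] = G.inDegree(v);
        if (count[v] == 0)
            fringe.add(v);            // INITIAL COLLECTION
    }
    while (!fringe.isEmpty()) {
        int v = fringe.remove();
        result[k] = v;                // VISIT: v is next in the result
        k += 1;
        for (int w : G.successors(v)) {
            count[w] -= 1;
            if (count[w] == 0)        // NEEDS PROCESSING
                fringe.add(w);
        }
    }
    return result;
}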
12.3.5 Minimum spanning trees
Consider a connected undirected graph with edge weights. A minimum(-weight) spanning tree (or MST for short) is a tree that is a subgraph of the given graph, contains all the vertices of the given graph, and minimizes the sum of its edge weights. For example, we might have a bunch of cities that we wish to connect up with telephone lines so as to provide a path between any two, all at minimal cost. The cities correspond to vertices and the possible connections between cities correspond to edges.³ Finding a minimal set of possible connections is the same as finding a minimum spanning tree (there can be more than one). To do this, we make use of a useful fact about MSTs.

³ It turns out that to get the really lowest costs, you want to introduce strategically placed extra cities to serve as connecting points. We'll ignore that here.
[Figure 12.8 shows the input graph on vertices A–H (upper left) and three stages of a topological sort of it.]

Figure 12.8: The input to a topological sort (upper left) and three stages in its computation. The shaded nodes are those that have been processed and moved to the result. The starred nodes are the ones in the fringe. Subscripts indicate count fields. A possible final sequence of nodes, given this start, is A, C, F, D, B, E, G, H.
FACT: If the vertices of a connected graph G are divided into two disjoint non-empty sets, V₀ and V₁, then any MST for G will contain one of the edges running between a vertex in V₀ and a vertex in V₁ that has minimal weight.

Proof. It's convenient to use a proof by contradiction. Suppose that some MST, T, doesn't contain any of the edges between V₀ and V₁ with minimal weight. Consider the effect of adding to T an edge from V₀ to V₁, e, that does have minimal weight, thus giving T′ (there must be such an edge, since otherwise T would be unconnected). Since T was a tree, the result of adding this new edge must have a cycle involving e (since it adds a new path between two nodes that already had a path between them in T). This is only possible if the cycle contains another edge from T, e′, that also runs between V₀ and V₁. By hypothesis, e has weight less than e′. If we remove e′ from T′, we get a tree once again, but since we have substituted e for e′, the sum of the edge weights for this new tree is less than that for T, a contradiction of T's minimality. Therefore, it was wrong to assume that T contained no minimal-weight edges from V₀ to V₁. (End of Proof)
We use this fact by taking V₀ to be a set of processed (marked) vertices for which we have selected edges that form a tree, and taking V₁ to be the set of all other vertices. By the Fact above, we may safely add to the tree any minimal-weight edge from the marked vertices to an unmarked vertex.
This gives what is known as Prim's algorithm. This time, we introduce two extra pieces of information for each node, dist[v] (a weight value) and parent[v] (a Vertex). At each point in the algorithm, the dist value for an unprocessed vertex (still in the fringe) is the minimal distance (weight) between it and a processed vertex, and the parent value is the processed vertex that achieves this minimal distance.
/** For all vertices v in G, set PARENT[v] to be the parent of v in
 * a MST of G. For each v in G, DIST[v] may be altered arbitrarily.
 * Assumes that G is connected. WEIGHT[e] is the weight of edge e. */
static void MST(Graph G, int[] weight, int[] parent, int[] dist)
{
    for (int v = 0; v < G.numVertices(); v += 1) {
        dist[v] = ∞;
        parent[v] = -1;
    }
    Let r be an arbitrary vertex in G;
    dist[r] = 0;
    Graph-traversal schema replacement for MST;
}
The appropriate settings for the graph-traversal schema are as follows.
COLLECTION OF VERTICES is a priority queue of vertices ordered by dist values, with smaller dists having higher priorities.
INITIAL COLLECTION contains all the vertices of G.
REMOVE HIGHEST PRIORITY ITEM removes the first item in the priority queue.
VISIT(v): for each edge (v, w) with weight n, if w is unmarked, and dist[w] > n, set dist[w] to n and set parent[w] to v.
NEEDS PROCESSING(v) is always false.
Figure 12.9 illustrates this algorithm in action.
12.3.6 Shortest paths
Suppose that we are given a weighted graph (directed or otherwise) and we want to find the shortest paths from some starting node to each other reachable node. A succinct presentation of the results of this algorithm is known as a shortest-path tree. This is a (not necessarily minimum) spanning tree for the graph with the desired starting node as the root, such that the path from the root to each other node in the tree is also a path of minimal total weight in the full graph.
A common algorithm for doing this, known as Dijkstra's algorithm, looks almost identical to Prim's algorithm for MSTs. We have the same PARENT and DIST data as before. However, whereas in Prim's algorithm DIST gives the shortest distance from an unmarked vertex to the marked vertices, in Dijkstra's algorithm it gives the length of the shortest path known so far that leads to it from the starting node.
/** For all vertices v in G reachable from START, set PARENT[v]
 * to be the parent of v in a shortest-path tree from START in G. For
 * all vertices in this tree, DIST[v] is set to the distance from START.
 * WEIGHT[e] are edge weights. Assumes that vertex START is in G. */
static void shortestPaths(Graph G, int start, int[] weight,
                          int[] parent, int[] dist)
{
    for (int v = 0; v < G.numVertices(); v += 1) {
        dist[v] = ∞;
        parent[v] = -1;
    }
    dist[start] = 0;
    Graph-traversal schema replacement for shortest-path tree;
}
where we substitute into the schema as follows:
COLLECTION OF VERTICES is a priority queue of vertices ordered by dist values, with smaller dists having higher priorities.
INITIAL COLLECTION contains all the vertices of G.
REMOVE HIGHEST PRIORITY ITEM removes the first item in the priority queue.
[Figure 12.9 shows successive stages of Prim's algorithm on a weighted graph with vertices A–H, starting from vertex A.]

Figure 12.9: Prim's algorithm for minimum spanning tree. Vertex r is A. The numbers in the nodes denote dist values. Dashed edges denote parent values; they form a MST after the last step. Unshaded nodes are in the fringe. The last two steps (which don't change parent pointers) have been collapsed into one.
MARK and MARKED can be trivial (return false and do nothing, respectively).
VISIT(v): for each edge (v, w) with weight n, if w is unmarked, and dist[w] > n + dist[v], set dist[w] to n + dist[v] and set parent[w] to v. Reorder fringe as needed.
NEEDS PROCESSING(v) is always false.
Figure 12.10 illustrates Dijkstra's algorithm in action.
Because of their very similar structure, the asymptotic running times of Dijkstra's and Prim's algorithms are similar. We visit each vertex once (removing an item from the priority queue), and reorder the priority queue at most once for each edge. Hence, if V is the number of vertices of G and E is the number of edges, we get an upper bound on the time required by these algorithms of O((V + E) lg V).
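As one possible realization (a sketch, not the text's implementation), the following uses java.util.PriorityQueue. That class has no operation for reordering an entry when its distance decreases, so this variation adds a fresh entry instead and skips stale ones on removal; it also seeds the queue with just the start vertex rather than all vertices. The Graph methods successors and weight assumed here are hypothetical.

import java.util.Arrays;
import java.util.PriorityQueue;

/** Set PARENT[v] and DIST[v] for every vertex reachable from START,
 *  as specified for shortestPaths above. */
static void shortestPaths(Graph G, int start, int[] parent, int[] dist) {
    Arrays.fill(dist, Integer.MAX_VALUE);   // stands in for infinity
    Arrays.fill(parent, -1);
    dist[start] = 0;
    // Each entry is { vertex, its dist value when the entry was added }.
    PriorityQueue<int[]> fringe =
        new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
    fringe.add(new int[] { start, 0 });
    while (!fringe.isEmpty()) {
        int[] entry = fringe.remove();
        int v = entry[0];
        if (entry[1] > dist[v])
            continue;                       // stale entry: v already done
        for (int w : G.successors(v)) {
            int d = dist[v] + G.weight(v, w);
            if (d < dist[w]) {              // shorter path to w found
                dist[w] = d;
                parent[w] = v;
                fringe.add(new int[] { w, d });
            }
        }
    }
}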
12.3.7 Kruskal's algorithm for MST
Just so you don't get the idea that our graph traversal schema is the only possible way to go, we'll consider a classical method for forming a minimum spanning tree, known as Kruskal's algorithm. This algorithm relies on a union-find structure. At any time, this structure contains a partition of the vertices: a collection of disjoint sets of vertices that includes all of the vertices. Initially, each vertex is alone in its own set. The idea is that we build up an MST one edge at a time. We repeatedly choose an edge of minimum weight that joins vertices in two different sets, add that edge to the MST we are building, and then combine (union) the two sets of vertices into one set. This process continues until all the sets have been combined into one (which must contain all the vertices). At any point, each set is a bunch of vertices that are all reachable from each other via the edges so far added to the MST. When there is only one set, it means that all of the vertices are reachable, and so we have a set of edges that spans the graph. It follows from the Fact in §12.3.5 that if we always add the minimally weighted edge that connects two of the disjoint sets of vertices, that edge can always be part of a MST, so the final result must also be a MST. Figure 12.11 illustrates the idea.
For the program, I'll assume we have a type UnionFind representing sets of sets of vertices. We need two operations on this type: an inquiry S.sameSet(v, w) that tells us whether vertices v and w are in the same set in S, and an operation S.union(v, w) that combines the sets containing vertices v and w into one. I'll also assume a type EdgeSet to contain the result.
[Figure 12.10 shows successive stages of Dijkstra's algorithm on the same weighted graph, starting from vertex A.]

Figure 12.10: Dijkstra's algorithm for shortest paths. The starting node is A. Numbers in nodes represent minimum distance to node A so far found (dist). Dashed arrows represent parent pointers; their final values show the shortest-path tree. The last three steps have been collapsed to one.
[Figure 12.11 shows successive stages of Kruskal's algorithm on the same weighted graph.]

Figure 12.11: Kruskal's algorithm. The numbers in the vertices denote sets: vertices with the same number are in the same set. Dashed edges have been added to the MST. This is different from the MST found in Figure 12.9.
/** Return a subset of edges of G forming a minimum spanning tree for G.
 * G must be a connected undirected graph. WEIGHT gives edge weights. */
EdgeSet MST(Graph G, int[] weight)
{
    UnionFind S;
    EdgeSet E;
    // Initialize S to { {v} | v is a vertex of G };
    S = new UnionFind(G.numVertices());
    E = new EdgeSet();
    For each edge (v,w) in G in order of increasing weight {
        if (! S.sameSet(v, w)) {
            Add (v,w) to E;
            S.union(v, w);
        }
    }
    return E;
}
The tricky part is this union-find bit. From what you know, you might well guess that each sameSet operation will require time Θ(N lg N) in the worst case (look in each of up to N sets, each of size up to N). Interestingly enough, there is a better way. Let's assume (as in this problem) that the sets contain integers from 0 to N − 1. At any given time, there will be up to N disjoint sets; we'll give them names (well, numbers really) by selecting a single representative member of each set and using that member (a number between 0 and N − 1) to identify the set. Then, if we can find the current representative member of the set containing any vertex, we can tell if two vertices are in the same set by seeing if their representative members are the same. One way to do this is to represent each disjoint set as a tree of vertices, but with children pointing at parents (you may recall that I said such a structure would eventually be useful). The root of each tree is the representative member, which we may find by following parent links. For example, we can represent the set of sets

    {{1, 2, 4, 6, 7}, {0, 3, 5}, {8, 9, 10}}

with the forest of trees

      1          3        8
     / \        / \       |
    7   2      0   5     10
        |                 |
        4                 9
        |
        6
We represent all this with a single integer array, parent, with parent[v] containing the number of the parent node of v, or −1 if v has no parent (i.e., is a representative member). The union operation is quite simple: to compute S.union(v, w), we find the roots of the trees containing v and w (by following the parent chain) and then make one of the two roots the child of the other. So, for example, we could compute S.union(6, 0) by finding the representative member for 6 (which is 1), and for 0 (which is 3) and then making 3 point to 1:

        1
      / | \
     7  2  3
        |  | \
        4  0  5
        |
        6
For best results, we should make the tree of lesser rank (roughly, height) point to the one of larger rank.⁴
However, while we're at it, let's throw in a twist. After we traverse the paths from 6 up to 1 and from 0 to 3, we'll re-organize the tree by having every node in those paths point directly at node 1 (in effect memoizing the result of the operation of finding the representative member). Thus, after finding the representative member for 6 and 0 and unioning, we will have the following, much flatter tree:

          1
    / / / | \ \
   7 2 4  6  0  3
                |
                5
This re-arrangement, which is called path compression, causes subsequent inquiries about vertices 6, 4, and 0 to be considerably faster than before. It turns out that with this trick (and the heuristic of making the shallower tree point at the deeper in a union), any sequence of M union and sameSet operations on a set of sets containing a total of N elements can be performed in time O(α(M, N) · M). Here, α(M, N) is an inverse of Ackermann's function. Specifically, α(M, N) is defined as the minimum i such that A(i, ⌊M/N⌋) > lg N, where

    A(1, j) = 2ʲ,                     for j ≥ 1,
    A(i, 1) = A(i − 1, 2),            for i ≥ 2,
    A(i, j) = A(i − 1, A(i, j − 1)),  for i, j ≥ 2.

⁴ We're cheating a bit in this section to make the effects of the optimizations we describe a bit clearer. We could not have constructed the stringy trees in these examples had we always made the lesser-rank tree point to the greater. So in effect, our examples start from union-find trees that were constructed in haphazard fashion, and we show what happens if we start doing things right from then on.
Well, this is all rather complicated, but suffice it to say that A grows monumentally fast, so that α grows with subglacial slowness, and is for all mortal purposes at most 4. In short, the amortized cost of M operations (union and sameSets in any combination) is roughly constant per operation. Thus, the time required for Kruskal's algorithm is dominated by the sorting time for the edges, and is asymptotically O(E lg E), for E the number of edges. This in turn equals O(E lg V) for a connected graph, where V is the number of vertices.
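A sketch of such a UnionFind type, with both the rank heuristic and path compression (an illustrative implementation, not the text's class):

/** Disjoint sets of the integers 0 .. N-1, with union by rank and
 *  path compression. */
class UnionFind {
    private final int[] parent;  // parent[v] is v's parent, or -1 for a root
    private final int[] rank;    // rough tree height, used to balance unions

    UnionFind(int n) {
        parent = new int[n];
        rank = new int[n];
        java.util.Arrays.fill(parent, -1);
    }

    /** The representative member of the set containing V. */
    int find(int v) {
        if (parent[v] == -1)
            return v;
        int root = find(parent[v]);
        parent[v] = root;        // path compression: point directly at root
        return root;
    }

    /** True iff V and W are currently in the same set. */
    boolean sameSet(int v, int w) {
        return find(v) == find(w);
    }

    /** Combine the sets containing V and W into one. */
    void union(int v, int w) {
        int rv = find(v), rw = find(w);
        if (rv == rw)
            return;
        if (rank[rv] < rank[rw]) {
            int t = rv; rv = rw; rw = t;
        }
        parent[rw] = rv;         // shallower tree points at the deeper
        if (rank[rv] == rank[rw])
            rank[rv] += 1;
    }
}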
Exercises
12.1. A borogove and a snark find themselves in a maze of twisty little passages that connect numerous rooms, one of which is the maze exit. The snark, being a boojum, finds borogoves especially tasty after a long day of causing people to softly and silently vanish away. Unfortunately for the snark (and contrariwise for his prospective snack), borogoves can run twice as fast as snarks and have an uncanny ability of finding the shortest route to the exit. Fortunately for the snark, his preternatural senses tell him precisely where the borogove is at any time, and he knows the maze like the back of his, er, talons. If he can arrive at the exit or in any of the rooms on the borogove's path before the borogove does (strictly before, not at the same time), he can catch it. The borogove is not particularly intelligent, and will always take the shortest path, even if the snark is waiting on it.
Thus, for example, in the following maze, the snark (starting at S) will dine in the shaded room, which he reaches in 6 time units, and the borogove (starting at B) in 7. The numbers on the connecting passages indicate distances (the numbers inside rooms are just labels). The snark travels at 0.5 units/hour, and the borogove at 1 unit/hour.
[Figure: the example maze. Rooms are labeled S (the snark, room 2), B (the borogove, room 1), E (the exit, room 0), and 3–9; the passages and their lengths are exactly those encoded in the sample input below.]
Write a program to read in a maze such as the above, and print one of two messages: "Snark eats," or "Borogove escapes," as appropriate. Place your answer in a class Chase (see the templates in cs61b/hw/hw7).
The input is as follows.
A positive integer N ≥ 3 indicating the number of rooms. You may assume that N < 1024. The rooms are assumed to be numbered from 0 to N − 1. Room 0 is always the exit. Initially room 1 contains the borogove and room 2 contains the snark.
A sequence of edges, each consisting of two room numbers (the order of the room numbers is immaterial) followed by an integer distance.
Assume that whenever the borogove has a choice between passages to take (i.e., all lead to a shortest path), he chooses the one to the room with the lowest number.
For the maze shown, a possible input is as follows.
10
2 3 2 2 4 2 3 5 4 3 6 1 4 1 3
5 6 6 5 8 2 5 9 1 6 7 2 6 9 3
7 0 1 7 9 8 1 8 1
8 9 7