
Published as a conference paper at ICLR 2018

UNDERSTANDING DEEP NEURAL NETWORKS WITH RECTIFIED LINEAR UNITS

Raman Arora∗ Amitabh Basu† Poorya Mianjy‡ Anirbit Mukherjee§
Johns Hopkins University

arXiv:1611.01491v6 [cs.LG] 28 Feb 2018

ABSTRACT

In this paper we investigate the family of functions representable by deep neural
networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a
ReLU DNN with one hidden layer to global optimality with runtime polynomial in
the data size, albeit exponential in the input dimension. Further, we improve on the
known lower bounds on size (from exponential to super exponential) for approximating
a ReLU deep net function by a shallower ReLU net. Our gap theorems hold for smoothly
parametrized families of “hard” functions, contrary to countable, discrete families
known in the literature. An example consequence of our gap theorems is the following:
for every natural number k there exists a function representable by a ReLU DNN with
k^2 hidden layers and total size k^3, such that any ReLU DNN with at most k hidden
layers will require at least ½ k^{k+1} − 1 total nodes. Finally, for the family of
R^n → R DNNs with ReLU activations, we show a new lower bound on the number of
affine pieces, which is larger than previous constructions in certain regimes of the
network architecture; most distinctively, our lower bound is demonstrated by an
explicit construction of a smoothly parameterized family of functions attaining this
scaling. Our construction utilizes the theory of zonotopes from polyhedral theory.

1 INTRODUCTION

Deep neural networks (DNNs) provide an excellent family of hypotheses for machine learning tasks
such as classification. Neural networks with a single hidden layer of finite size can represent any
continuous function on a compact subset of R^n arbitrarily well. The universal approximation result
was first given by Cybenko in 1989 for the sigmoidal activation function (Cybenko, 1989), and later
generalized by Hornik to an arbitrary bounded and nonconstant activation function Hornik (1991).
Furthermore, neural networks have finite VC dimension (depending polynomially on the number of
edges in the network), and therefore are PAC (probably approximately correct) learnable using a
sample of size that is polynomial in the size of the networks Anthony & Bartlett (1999). However,
neural network based methods were shown to be computationally hard to learn (Anthony & Bartlett,
1999) and had mixed empirical success. Consequently, DNNs fell out of favor by the late 90s.
Recently, there has been a resurgence of DNNs with the advent of deep learning LeCun et al. (2015).
Deep learning, loosely speaking, refers to a suite of computational techniques that have been
developed recently for training DNNs. It started with the work of Hinton et al. (2006), which gave
empirical evidence that if DNNs are initialized properly (for instance, using unsupervised
pre-training), then we can find good solutions in a reasonable amount of runtime. This work was soon
followed by a series of early successes of deep learning at significantly improving the state-of-the-art
in speech recognition Hinton et al. (2012). Since then, deep learning has received immense attention
from the machine learning community, with several state-of-the-art AI systems in speech recognition,
image classification, and natural language processing based on deep neural nets Hinton et al. (2012);
Dahl et al. (2013); Krizhevsky et al. (2012); Le (2013); Sutskever et al. (2014). While there is less
evidence now that pre-training actually helps, several other solutions have since been put forth

∗ Department of Computer Science, Email: [email protected]
† Department of Applied Mathematics and Statistics, Email: [email protected]
‡ Department of Computer Science, Email: [email protected]
§ Department of Applied Mathematics and Statistics, Email: [email protected]


to address the issue of efficiently training DNNs. These include heuristics such as dropout
Srivastava et al. (2014), but also alternate deep architectures such as convolutional neural
networks Sermanet et al. (2014), deep belief networks Hinton et al. (2006), and deep Boltzmann
machines Salakhutdinov & Hinton (2009). In addition, deep architectures based on new non-saturating
activation functions have been suggested to be more effectively trainable – the most successful and
widely popular of these is the rectified linear unit (ReLU) activation, i.e., σ(x) = max{0, x}, which
is the focus of study in this paper.
In this paper, we formally study deep neural networks with rectified linear units; we refer to these
deep architectures as ReLU DNNs. Our work is inspired by recent attempts to understand the reason
behind the successes of deep learning, both in terms of the structure of the functions represented
by DNNs Telgarsky (2015; 2016); Kane & Williams (2015); Shamir (2016), as well as efforts to better
understand the non-convex nature of the training problem of DNNs Kawaguchi (2016); Haeffele & Vidal
(2015). Our investigation of the function space represented by ReLU DNNs also takes inspiration from
the classical theory of circuit complexity; we refer the reader to Arora & Barak (2009); Shpilka &
Yehudayoff (2010); Jukna (2012); Saptharishi (2014); Allender (1998) for various surveys of this deep
and fascinating field. In particular, our gap results are inspired by results like the ones of Hastad
(1986), Razborov (1987) and Smolensky (1987), which show a strict separation of complexity classes.
We make progress towards similar statements with deep neural nets with ReLU activation.

1.1 NOTATION AND DEFINITIONS

We extend the ReLU activation function to vectors x ∈ R^n through the entry-wise operation: σ(x) =
(max{0, x_1}, max{0, x_2}, . . . , max{0, x_n}). For any m, n ∈ N, let A^n_m and L^n_m denote the class
of affine and linear transformations from R^m → R^n, respectively.
Definition 1. [ReLU DNNs, depth, width, size] For any number of hidden layers k ∈ N, input and
output dimensions w_0, w_{k+1} ∈ N, an R^{w_0} → R^{w_{k+1}} ReLU DNN is given by specifying a sequence
of k natural numbers w_1, w_2, . . . , w_k representing widths of the hidden layers, a set of k affine
transformations T_i : R^{w_{i−1}} → R^{w_i} for i = 1, . . . , k, and a linear transformation
T_{k+1} : R^{w_k} → R^{w_{k+1}} corresponding to weights of the hidden layers. Such a ReLU DNN is
called a (k + 1)-layer ReLU DNN, and is said to have k hidden layers. The function
f : R^{w_0} → R^{w_{k+1}} computed or represented by this ReLU DNN is

f = T_{k+1} ◦ σ ◦ T_k ◦ · · · ◦ T_2 ◦ σ ◦ T_1,    (1.1)

where ◦ denotes function composition. The depth of a ReLU DNN is defined as k + 1. The width
of a ReLU DNN is max{w_1, . . . , w_k}. The size of the ReLU DNN is w_1 + w_2 + . . . + w_k.
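To make the composition in (1.1) concrete, here is a minimal numpy sketch of a (k + 1)-layer ReLU DNN as alternating affine maps and the entry-wise σ; the function and variable names are ours, not from the paper.

    import numpy as np

    def relu_dnn(weights, biases, x):
        """Evaluate f = T_{k+1} o sigma o T_k o ... o sigma o T_1 at x.
        weights[i], biases[i] define T_{i+1}(z) = weights[i] @ z + biases[i];
        the last map is linear, so biases[-1] should be zero."""
        z = np.asarray(x, dtype=float)
        for W, b in zip(weights[:-1], biases[:-1]):
            z = np.maximum(0.0, W @ z + b)        # sigma applied entry-wise
        return weights[-1] @ z + biases[-1]       # final linear layer T_{k+1}

    # Example: a 2-layer (one hidden layer) R^2 -> R net of width 3.
    W1, b1 = np.ones((3, 2)), np.zeros(3)
    W2, b2 = np.array([[1.0, -1.0, 0.5]]), np.zeros(1)
    y = relu_dnn([W1, W2], [b1, b2], np.array([0.3, -0.2]))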
Definition 2. We denote the class of R^{w_0} → R^{w_{k+1}} ReLU DNNs with k hidden layers of widths
{w_i}_{i=1}^{k} by F_{{w_i}_{i=0}^{k+1}}, i.e.

\[ \mathcal{F}_{\{w_i\}_{i=0}^{k+1}} := \{ T_{k+1} \circ \sigma \circ T_k \circ \cdots \circ \sigma \circ T_1 \;:\; T_i \in \mathbf{A}^{w_i}_{w_{i-1}} \ \forall i \in \{1, \dots, k\}, \ T_{k+1} \in \mathbf{L}^{w_{k+1}}_{w_k} \} \tag{1.2} \]

Definition 3. [Piecewise linear functions] We say a function f : R^n → R is continuous piecewise
linear (PWL) if there exists a finite set of polyhedra whose union is R^n, and f is affine linear over
each polyhedron (note that the definition automatically implies continuity of the function because
the affine regions are closed and cover R^n, and affine functions are continuous). The number of
pieces of f is the number of maximal connected subsets of R^n over which f is affine linear (which
is finite).
Many of our important statements will be phrased in terms of the following simplex.
Definition 4. Let M > 0 be any positive real number and p ≥ 1 be any natural number. Define the
following set:

∆^p_M := {x ∈ R^p : 0 < x_1 < x_2 < . . . < x_p < M}.

2 EXACT CHARACTERIZATION OF THE FUNCTION CLASS REPRESENTED BY ReLU DNNS
One of the main advantages of DNNs is that they can represent a large family of functions with
a relatively small number of parameters. In this section, we give an exact characterization of the
functions representable by ReLU DNNs. Moreover, we show how structural properties of ReLU
DNNs, specifically their depth and width, affect their expressive power. It is clear from the
definition that any function from R^n → R represented by a ReLU DNN is a continuous piecewise linear
(PWL) function. In what follows, we show that the converse is also true, that is, any PWL function
is representable by a ReLU DNN. In particular, the following theorem establishes a one-to-one
correspondence between the class of ReLU DNNs and PWL functions.
Theorem 2.1. Every R^n → R ReLU DNN represents a piecewise linear function, and every piecewise
linear function R^n → R can be represented by a ReLU DNN with depth at most ⌈log_2(n + 1)⌉ + 1.

Proof Sketch: It is clear that any function represented by a ReLU DNN is a PWL function. To
see the converse, we first note that any PWL function can be represented as a linear combination of
piecewise linear convex functions. More formally, by Theorem 1 in (Wang & Sun, 2005), for every
piecewise linear function f : R^n → R, there exists a finite set of affine linear functions ℓ_1, . . . , ℓ_k
and subsets S_1, . . . , S_p ⊆ {1, . . . , k} (not necessarily disjoint), where each S_j is of cardinality at
most n + 1, such that

\[ f = \sum_{j=1}^{p} s_j \left( \max_{i \in S_j} \ell_i \right), \tag{2.1} \]

where s_j ∈ {−1, +1} for all j = 1, . . . , p. Since a function of the form max_{i∈S_j} ℓ_i is a piecewise
linear convex function with at most n + 1 pieces (because |S_j| ≤ n + 1), Equation (2.1) says
that any continuous piecewise linear function (not necessarily convex) can be obtained as a linear
combination of piecewise linear convex functions, each of which has at most n + 1 affine pieces.
Furthermore, Lemmas D.1, D.2 and D.3 in the Appendix (see supplementary material) show that
composition, addition, and pointwise maximum of PWL functions are also representable by ReLU
DNNs. In particular, in Lemma D.3 we note that max{x, y} = (x + y)/2 + |x − y|/2 is implementable by a
two layer ReLU network, and use this construction in an inductive manner to show that the maximum of
n + 1 numbers can be computed using a ReLU DNN with depth at most ⌈log_2(n + 1)⌉.
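As a tiny concrete instance of the decomposition (2.1) (our example, with n = 1 so each S_j has at most two elements): the hat function max{0, 1 − |x|} can be written as max{0, x + 1} − max{0, x} − max{0, x} + max{0, x − 1}, i.e. p = 4 terms with signs s_j ∈ {−1, +1}, each a maximum of at most two affine functions. A quick numerical check:

    import numpy as np

    x = np.linspace(-3.0, 3.0, 1001)
    hat = np.maximum(0.0, 1.0 - np.abs(x))          # target PWL function

    # Four convex pieces of the form max{ell_i : i in S_j}, each with |S_j| <= 2,
    # combined with signs s_j in {-1, +1} as in equation (2.1).
    decomp = (np.maximum(0.0, x + 1.0)
              - np.maximum(0.0, x)
              - np.maximum(0.0, x)
              + np.maximum(0.0, x - 1.0))

    assert np.allclose(hat, decomp)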
While Theorem 2.1 gives an upper bound on the depth of the networks needed to represent all
continuous piecewise linear functions on R^n, it does not give any tight bounds on the size of the
networks that are needed to represent a given piecewise linear function. For n = 1, we give tight
bounds on size as follows:
Theorem 2.2. Given any piecewise linear function f : R → R with p pieces, there exists a 2-layer DNN
with at most p nodes that can represent f. Moreover, any 2-layer DNN that represents f has size at
least p − 1.

Finally, the main result of this section follows from Theorem 2.1 and the well-known facts that the
piecewise linear functions are dense in the family of compactly supported continuous functions, and
the family of compactly supported continuous functions is dense in L^q(R^n) (Royden & Fitzpatrick,
2010). Recall that L^q(R^n) is the space of Lebesgue integrable functions f such that ∫ |f|^q dµ < ∞,
where µ is the Lebesgue measure on R^n (see Royden & Fitzpatrick (2010)).
Theorem 2.3. Every function in L^q(R^n), (1 ≤ q ≤ ∞) can be arbitrarily well-approximated in the
L^q norm (which for a function f is given by ‖f‖_q = (∫ |f|^q)^{1/q}) by a ReLU DNN function with
at most ⌈log_2(n + 1)⌉ hidden layers. Moreover, for n = 1, any such L^q function can be arbitrarily
well-approximated by a 2-layer DNN, with tight bounds on the size of such a DNN in terms of the
approximation.

Proofs of Theorems 2.2 and 2.3 are provided in Appendix A. We would like to remark that a weaker
version of Theorem 2.1 was observed in (Goodfellow et al., 2013, Proposition 4.1) (with no bound
on the depth), along with a universal approximation theorem (Goodfellow et al., 2013, Theorem
4.3) similar to Theorem 2.3. The authors of Goodfellow et al. (2013) also used a previous result of
Wang (Wang, 2004) for obtaining their result. In subsequent work, Boris Hanin (Hanin, 2017) has,
among other things, found width and depth upper bounds for ReLU net representation of positive
PWL functions on [0, 1]^n. The width upper bound is n + 3 for general positive PWL functions and
n + 1 for convex positive PWL functions. For convex positive PWL functions his depth upper
bound is sharp if we disallow dead ReLUs.


3 BENEFITS OF DEPTH
The success of deep learning has been largely attributed to the depth of the networks, i.e., the number
of successive affine transformations followed by nonlinearities, which is shown to extract
hierarchical features from the data. In contrast, traditional machine learning frameworks including
support vector machines, generalized linear models, and kernel machines can be seen as instances of
shallow networks, where a linear transformation acts on a single layer of nonlinear feature extraction.
In this section, we explore the importance of depth in ReLU DNNs. In particular, in Section 3.1, we
provide a smoothly parametrized family of R → R “hard” functions representable by ReLU DNNs,
which requires exponentially larger size for a shallower network. Furthermore, in Section 3.2, we
construct a continuum of R^n → R “hard” functions representable by ReLU DNNs, which to the
best of our knowledge is the first explicit construction of ReLU DNN functions whose number of
affine pieces grows exponentially with input dimension. The proofs of the theorems in this section
are provided in Appendix B.

3.1 CIRCUIT LOWER BOUNDS FOR R → R ReLU DNNS

In this section, we are only concerned with R → R ReLU DNNs, i.e., both input and output
dimensions are equal to one. The following theorem shows the depth-size trade-off in this setting.
Theorem 3.1. For every pair of natural numbers k ≥ 1, w ≥ 2, there exists a family of hard
functions representable by an R → R (k + 1)-layer ReLU DNN of width w such that if it is also
representable by a (k′ + 1)-layer ReLU DNN for any k′ ≤ k, then this (k′ + 1)-layer ReLU DNN
has size at least ½ k′ w^{k/k′} − 1.

In fact, our family of hard functions described above has a very intricate structure, as stated below.
Theorem 3.2. For every k ≥ 1, w ≥ 2, every member of the family of hard functions in Theorem 3.1
has w^k pieces and this family can be parametrized by

\[ \bigcup_{M > 0} \underbrace{\Delta^{w-1}_M \times \Delta^{w-1}_M \times \cdots \times \Delta^{w-1}_M}_{k \text{ times}}, \tag{3.1} \]

i.e., for every point in the set above, there exists a distinct function with the stated properties.

The following is an immediate corollary of Theorem 3.1, obtained by choosing the parameters carefully.
Corollary 3.3. For every k ∈ N and ε > 0, there is a family of functions defined on the real line
such that every function f from this family can be represented by a (k^{1+ε}) + 1-layer DNN with size
k^{2+ε}, and if f is represented by a k + 1-layer DNN, then this DNN must have size at least
½ k^ε · k^{k^ε} − 1. Moreover, this family can be parametrized as ∪_{M>0} ∆^{k^{2+ε}−1}_M.

A particularly illuminating special case is obtained by setting ε = 1 in Corollary 3.3:

Corollary 3.4. For every natural number k ∈ N, there is a family of functions parameterized by the
set ∪_{M>0} ∆^{k^3−1}_M such that any f from this family can be represented by a k^2 + 1-layer DNN with
k^3 nodes, and every k + 1-layer DNN that represents f needs at least ½ k^{k+1} − 1 nodes.
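To see where the bound in Corollary 3.4 comes from, one can instantiate Theorem 3.1 with the parameters of Corollary 3.4 (this worked instance is ours):

\[
\text{deep net: } k^2 \text{ hidden layers of width } w = k \ (\text{size } k \cdot k^2 = k^3); \qquad
\text{shallow net: } k' = k \ \Rightarrow\ \text{size} \ \ge\ \tfrac{1}{2}\, k'\, w^{k^2/k'} - 1 \;=\; \tfrac{1}{2}\, k \cdot k^{k} - 1 \;=\; \tfrac{1}{2} k^{k+1} - 1.
\]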

We can also get hardness of approximation versions of Theorem 3.1 and Corollaries 3.3 and 3.4,
with the same gaps (up to constant terms), using the following theorem.
Theorem 3.5. For every k ≥ 1, w ≥ 2, there exists a function f_{k,w} that can be represented by
a (k + 1)-layer ReLU DNN with w nodes in each layer, such that for all δ > 0 and k′ ≤ k the
following holds:

\[ \inf_{g \in \mathcal{G}_{k',\delta}} \int_{x=0}^{1} |f_{k,w}(x) - g(x)|\, dx > \delta, \]

where G_{k′,δ} is the family of functions representable by ReLU DNNs with depth at most k′ + 1 and
size at most \( k' \, w^{k/k'} (1 - 4\delta)^{1/k'} / 2^{1+1/k'} \).

The depth-size trade-off results in Theorems 3.1 and 3.5 extend and improve Telgarsky's theorems
from (Telgarsky, 2015; 2016) in the following three ways:


(i) If we apply our Theorem 3.5 to the pair of neural nets considered by Telgarsky in Theorem
1.1 of Telgarsky (2016), which are at depths k^3 (of size also scaling as k^3) and k, then
for this purpose of approximation in the ℓ_1-norm we would get a size lower bound for
the shallower net which scales as Ω(2^{k^2}), which is exponentially (in depth) larger than the
lower bound of Ω(2^k) that Telgarsky can get for this scenario.
(ii) Telgarsky's family of hard functions is parameterized by a single natural number k. In
contrast, we show that for every pair of natural numbers w and k, and every point from the set
in equation 3.1, there exists a “hard” function which, to be represented by a depth k′ network,
would need a size of at least w^{k/k′} k′. With the extra flexibility of choosing the parameter w,
for the purpose of showing gaps in representation ability of deep nets we can show size
lower bounds which are super-exponential in depth, as explained in Corollaries 3.3 and 3.4.
(iii) A characteristic feature of the “hard” functions in Boolean circuit complexity is that they
are usually a countable family of functions and not a “smooth” family of hard functions.
In fact, in the last section of Telgarsky (2015), Telgarsky states this as a “weakness” of the
state-of-the-art results on “hard” functions for both Boolean circuit complexity and neural
nets research. In contrast, we provide a smoothly parameterized family of “hard” functions
in Section 3.1 (parametrized by the set in equation 3.1). Such a continuum of hard functions
wasn’t demonstrated before this work.

We point out that Telgarsky's results in (Telgarsky, 2016) apply to deep neural nets with a host of
different activation functions, whereas our results are specifically for neural nets with rectified
linear units. In this sense, Telgarsky's results from (Telgarsky, 2016) are more general than our
results in this paper, but with weaker gap guarantees. Eldan-Shamir (Shamir, 2016; Eldan & Shamir,
2016) show that there exists an R^n → R function that can be represented by a 3-layer DNN, but takes
a number of nodes exponential in n to be approximated to within some constant by a 2-layer DNN.
While their results are not immediately comparable with Telgarsky's or our results, it is an interesting
open question to extend their results to a constant depth hierarchy statement analogous to the recent
result of Rossman et al. (Rossman et al., 2015). We also note that in the last few years, there has been
much effort in the community to show size lower bounds on ReLU DNNs trying to approximate
various classes of functions which are themselves not necessarily exactly representable by ReLU
DNNs (Yarotsky, 2016; Liang & Srikant, 2016; Safran & Shamir, 2017).

3.2 A CONTINUUM OF HARD FUNCTIONS FOR R^n → R FOR n ≥ 2

One measure of complexity of a family of R^n → R “hard” functions represented by ReLU DNNs
is the asymptotics of the number of pieces as a function of the dimension n, depth k + 1 and size s
of the ReLU DNNs. More precisely, suppose one has a family H of functions such that for every
n, k, w ∈ N the family contains at least one R^n → R function representable by a ReLU DNN with
depth at most k + 1 and maximum width at most w. The following definition formalizes a notion of
complexity for such an H.
Definition 5 (comp_H(n, k, w)). The measure comp_H(n, k, w) is defined as the maximum number
of pieces (see Definition 3) of an R^n → R function from H that can be represented by a ReLU DNN
with depth at most k + 1 and maximum width at most w.

Similar measures have been studied in previous works Montufar et al. (2014); Pascanu et al.
(2013); Raghu et al. (2016). The best known families H are the ones from Theorem 4 of (Montufar
et al., 2014) and a mild generalization of Theorem 1.1 of (Telgarsky, 2016) to k layers of ReLU
activations with width w; these constructions achieve

\[ \mathrm{comp}_H(n,k,w) = \left\lfloor \frac{w}{n} \right\rfloor^{(k-1)n} \left( \sum_{j=0}^{n} \binom{w}{j} \right) \quad \text{and} \quad \mathrm{comp}_H(n,k,w) = O(w^k), \]

respectively. At the end of this section we explain the precise sense in which we improve on these
numbers. An analysis of this complexity measure is done using integer programming techniques in
(Serra et al., 2017).
Definition 6. Let b^1, . . . , b^m ∈ R^n. The zonotope formed by b^1, . . . , b^m ∈ R^n is defined as

Z(b^1, . . . , b^m) := {λ_1 b^1 + . . . + λ_m b^m : −1 ≤ λ_i ≤ 1, i = 1, . . . , m}.


(a) H_{½,½} ◦ N_{ℓ_1}   (b) H_{½,½} ◦ γ_{Z(b^1,b^2,b^3,b^4)}   (c) H_{½,½,½} ◦ γ_{Z(b^1,b^2,b^3,b^4)}

Figure 1: We fix the a vectors for a two hidden layer R → R hard function as a^1 = a^2 = (½) ∈ ∆^1_1.
Left: A specific hard function induced by the ℓ_1 norm: ZONOTOPE^2_{2,2,2}[a^1, a^2, b^1, b^2] where
b^1 = (0, 1) and b^2 = (1, 0). Note that in this case the function can be seen as a composition of
H_{a^1,a^2} with the ℓ_1-norm N_{ℓ_1}(x) := ‖x‖_1 = γ_{Z((0,1),(1,0))}. Middle: A typical hard function
ZONOTOPE^2_{2,2,4}[a^1, a^2, c^1, c^2, c^3, c^4] with generators c^1 = (¼, ½), c^2 = (−½, 0), c^3 = (0, −¼)
and c^4 = (−¼, −¼). Note how increasing the number of zonotope generators makes the function
more complex. Right: A harder function from the ZONOTOPE^2_{3,2,4} family with the same set of
generators c^1, c^2, c^3, c^4 but one more hidden layer (k = 3). Note how increasing the depth makes the
function more complex. (For illustrative purposes we plot only the part of the function which lies
above zero.)

The set of vertices of Z(b^1, . . . , b^m) will be denoted by vert(Z(b^1, . . . , b^m)). The support
function γ_{Z(b^1,...,b^m)} : R^n → R associated with the zonotope Z(b^1, . . . , b^m) is defined as

\[ \gamma_{Z(b^1,\dots,b^m)}(r) = \max_{x \in Z(b^1,\dots,b^m)} \langle r, x \rangle. \]

The following results are well-known in the theory of zonotopes (Ziegler, 1995).
Theorem 3.6. The following are all true.

1. \( |\mathrm{vert}(Z(b^1,\dots,b^m))| \le \sum_{i=0}^{n-1} \binom{m-1}{i} \). The set of (b^1, . . . , b^m) ∈ R^n × . . . × R^n such that this does not hold at equality is a measure zero set.

2. \( \gamma_{Z(b^1,\dots,b^m)}(r) = \max_{x \in Z(b^1,\dots,b^m)} \langle r, x\rangle = \max_{x \in \mathrm{vert}(Z(b^1,\dots,b^m))} \langle r, x\rangle \), and γ_{Z(b^1,...,b^m)} is therefore a piecewise linear function with |vert(Z(b^1, . . . , b^m))| pieces.

3. \( \gamma_{Z(b^1,\dots,b^m)}(r) = |\langle r, b^1\rangle| + \dots + |\langle r, b^m\rangle| \).


Definition 7 (extremal zonotope set). The set S(n, m) will denote the set of (b^1, . . . , b^m) ∈ R^n ×
. . . × R^n such that \( |\mathrm{vert}(Z(b^1,\dots,b^m))| = \sum_{i=0}^{n-1} \binom{m-1}{i} \). S(n, m) is the so-called “extremal
zonotope set”, which is a subset of R^{nm} whose complement has zero Lebesgue measure in R^{nm}.
Lemma 3.7. Given any b^1, . . . , b^m ∈ R^n, there exists a 2-layer ReLU DNN with size 2m which
represents the function γ_{Z(b^1,...,b^m)}(r).
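A minimal numpy sketch of the 2-layer, size-2m representation behind Lemma 3.7, using part 3 of Theorem 3.6 and the identity |t| = max{0, t} + max{0, −t}; the function names are ours.

    import numpy as np

    def support_function_relu(B, r):
        """gamma_{Z(b^1,...,b^m)}(r) = sum_i |<r, b^i>|, written with 2m ReLU units.
        B is an (m, n) array whose rows are the generators b^i."""
        t = B @ r                                   # the m inner products <r, b^i>
        return np.sum(np.maximum(0.0, t) + np.maximum(0.0, -t))

    # Example with the two generators from Figure 1(a): gamma equals the l1 norm.
    B = np.array([[0.0, 1.0], [1.0, 0.0]])
    r = np.array([-0.3, 0.8])
    assert np.isclose(support_function_relu(B, r), np.abs(r).sum())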
Definition 8. For p ∈ N and a ∈ ∆^p_M, we define a function h_a : R → R which is piecewise linear
over the segments (−∞, 0], [0, a_1], [a_1, a_2], . . . , [a_p, M], [M, +∞), defined as follows: h_a(x) = 0
for all x ≤ 0, h_a(a_i) = M (i mod 2), h_a(M) = M − h_a(a_p), and for x ≥ M, h_a(x) is a linear
continuation of the piece over the interval [a_p, M]. Note that the function has p + 2 pieces, with the
leftmost piece having slope 0. Furthermore, for a^1, . . . , a^k ∈ ∆^p_M, we denote the composition of
the functions h_{a^1}, h_{a^2}, . . . , h_{a^k} by

H_{a^1,...,a^k} := h_{a^k} ◦ h_{a^{k−1}} ◦ . . . ◦ h_{a^1}.
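The following numpy sketch of h_a and of the composition H_{a^1,...,a^k} from Definition 8 may help; for simplicity it is restricted to the interval [0, M] (h_a is 0 to the left of 0 and extends linearly past M in Definition 8), and the helper names and the use of np.interp are ours.

    import numpy as np

    def h(a, M):
        """Piecewise linear h_a on [0, M] for a = (a_1 < ... < a_p) in Delta^p_M."""
        a = np.asarray(a, dtype=float)
        xs = np.concatenate(([0.0], a, [M]))
        ys = np.concatenate(([0.0],
                             M * (np.arange(1, len(a) + 1) % 2),   # h_a(a_i) = M*(i mod 2)
                             [M - M * (len(a) % 2)]))              # h_a(M) = M - h_a(a_p)
        return lambda x: np.interp(x, xs, ys)

    def H(a_list, M):
        """Composition h_{a^k} o ... o h_{a^1}; with p = w - 1 it has w^k pieces on [0, M]."""
        funcs = [h(a, M) for a in a_list]
        def composed(x):
            for f in funcs:
                x = f(x)
            return x
        return composed

    # Example: w = 3, k = 2, M = 1 gives the 3 * 3 = 9-piece zig-zag on [0, 1].
    H_ex = H([[1/3, 2/3], [1/3, 2/3]], 1.0)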
Proposition 3.8. Given any tuple (b^1, . . . , b^m) ∈ S(n, m) and any point

\[ (a^1, \dots, a^k) \in \bigcup_{M>0} \underbrace{\Delta^{w-1}_M \times \Delta^{w-1}_M \times \cdots \times \Delta^{w-1}_M}_{k \text{ times}}, \]

the function ZONOTOPE^n_{k,w,m}[a^1, . . . , a^k, b^1, . . . , b^m] := H_{a^1,...,a^k} ◦ γ_{Z(b^1,...,b^m)} has
(m − 1)^{n−1} w^k pieces and it can be represented by a (k + 2)-layer ReLU DNN with size 2m + wk.


Finally, we are ready to state the main result of this section.

Theorem 3.9. For every tuple of natural numbers n, k, m ≥ 1 and w ≥ 2, there exists a family of
R^n → R functions, which we call ZONOTOPE^n_{k,w,m}, with the following properties:

(i) Every f ∈ ZONOTOPE^n_{k,w,m} is representable by a ReLU DNN of depth k + 2 and size
2m + wk, and has \( \left( \sum_{i=0}^{n-1} \binom{m-1}{i} \right) w^k \) pieces.

(ii) Consider any f ∈ ZONOTOPE^n_{k,w,m}. If f is represented by a (k′ + 1)-layer DNN for any
k′ ≤ k, then this (k′ + 1)-layer DNN has size at least

\[ \max\left\{ \frac{1}{2}\left(k' w^{\frac{k}{k'}}\right) \cdot (m-1)^{\left(1-\frac{1}{n}\right)\frac{1}{k'}} - 1 \;,\; \frac{w^{\frac{k}{k'}}}{n^{1/k'}}\, k' \right\}. \]

(iii) The family ZONOTOPE^n_{k,w,m} is in one-to-one correspondence with

\[ S(n, m) \times \bigcup_{M>0} \underbrace{\Delta^{w-1}_M \times \Delta^{w-1}_M \times \cdots \times \Delta^{w-1}_M}_{k \text{ times}}. \]
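For a quick feel of part (i), setting n = 2 (our worked instance):

\[
\sum_{i=0}^{n-1}\binom{m-1}{i}\Big|_{n=2} = 1 + (m-1) = m, \qquad \text{so every } f \in \mathrm{ZONOTOPE}^2_{k,w,m} \text{ has } m\, w^{k} \text{ pieces at size only } 2m + wk.
\]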

Comparison to the results in (Montufar et al., 2014)

Firstly, we note that the construction in (Montufar et al., 2014) requires all the hidden layers to have
width at least as big as the input dimensionality n. In contrast, we do not impose such restrictions,
and the network size in our construction is independent of the input dimensionality. Thus our result
probes networks with bottleneck architectures whose complexity cannot be seen from their result.
Secondly, in terms of our complexity measure, there seem to be regimes where our bound does
better. One such regime, for example, is when n ≤ w < 2n and k ∈ Ω(n / log(n)), by setting m < n
in our construction.
Thirdly, it is not clear to us whether the construction in (Montufar et al., 2014) gives a smoothly
parameterized family of functions other than by introducing small perturbations of the construction
in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one
correspondence with a well-understood manifold like the higher-dimensional torus.

4 TRAINING 2-LAYER R^n → R ReLU DNNS TO GLOBAL OPTIMALITY


In this section we consider the following empirical risk minimization problem. Given D data points
(x_i, y_i) ∈ R^n × R, i = 1, . . . , D, find the function f, represented by a 2-layer R^n → R ReLU DNN
of width w, that minimizes the following optimization problem:

\[ \min_{f \in \mathcal{F}_{\{n,w,1\}}} \; \frac{1}{D}\sum_{i=1}^{D} \ell\big(f(x_i), y_i\big) \;\equiv\; \min_{T_1 \in \mathbf{A}^w_n,\; T_2 \in \mathbf{L}^1_w} \; \frac{1}{D}\sum_{i=1}^{D} \ell\big(T_2(\sigma(T_1(x_i))), y_i\big), \tag{4.1} \]

where ℓ : R × R → R is a convex loss function (common loss functions are the squared loss,
ℓ(y, y′) = (y − y′)^2, and the hinge loss function given by ℓ(y, y′) = max{0, 1 − y y′}). Our main
result of this section gives an algorithm to solve the above empirical risk minimization problem to
global optimality.
Theorem 4.1. There exists an algorithm to find a global optimum of Problem 4.1 in time
O(2^w D^{nw} poly(D, n, w)). Note that the running time O(2^w D^{nw} poly(D, n, w)) is polynomial
in the data size D for fixed n, w.

Proof Sketch: A full proof of Theorem 4.1 is included in Appendix C. Here we provide a sketch
of the proof. When the empirical risk minimization problem is viewed as an optimization problem
in the space of weights of the ReLU DNN, it is a nonconvex, quadratic problem. However, one can
instead search over the space of functions representable by 2-layer DNNs by writing them in the
form similar to (2.1). This breaks the problem into two parts: a combinatorial search and then a
convex problem that is essentially linear regression with linear inequality constraints. This enables
us to guarantee global optimality.


Algorithm 1 Empirical Risk Minimization

1: function ERM(D)                              ▷ Where D = {(x_i, y_i)}_{i=1}^D ⊂ R^n × R
2:   S = {+1, −1}^w                             ▷ All possible instantiations of top layer weights
3:   P^i = {(P^i_+, P^i_−)}, i = 1, . . . , w   ▷ All possible partitions of data into two parts
4:   P = P^1 × P^2 × · · · × P^w
5:   count = 1                                  ▷ Counter
6:   for s ∈ S do
7:     for {(P^i_+, P^i_−)}_{i=1}^w ∈ P do
8:       loss(count) = minimize over ã, b̃:  Σ_{j=1}^{D} Σ_{i : j ∈ P^i_+} ℓ(s_i(ã^i · x_j + b̃_i), y_j)
                       subject to:           ã^i · x_j + b̃_i ≤ 0  ∀j ∈ P^i_−
                                             ã^i · x_j + b̃_i ≥ 0  ∀j ∈ P^i_+
9:       count + +
10:    end for
11:    OPT = argmin loss(count)
12:  end for
13:  return {ã}, {b̃}, s corresponding to OPT's iterate
14: end function

Let T_1(x) = Ax + b and T_2(y) = a′ · y for A ∈ R^{w×n} and b, a′ ∈ R^w. If we denote the i-th row
of the matrix A by a^i, and write b_i, a′_i to denote the i-th coordinates of the vectors b, a′ respectively,
then due to the homogeneity of ReLU gates, the network output can be represented as

\[ f(x) = \sum_{i=1}^{w} a'_i \max\{0, a^i \cdot x + b_i\} = \sum_{i=1}^{w} s_i \max\{0, \tilde{a}^i \cdot x + \tilde{b}_i\}, \]

where ã^i ∈ R^n, b̃_i ∈ R and s_i ∈ {−1, +1} for all i = 1, . . . , w. For any hidden node i ∈
{1, . . . , w}, the pair (ã^i, b̃_i) induces a partition P^i := (P^i_+, P^i_−) on the dataset, given by
P^i_− = {j : ã^i · x_j + b̃_i ≤ 0} and P^i_+ = {1, . . . , D} \ P^i_−. Algorithm 1 proceeds by generating all
combinations of the partitions P^i as well as the top layer weights s ∈ {+1, −1}^w, and minimizing the
loss Σ_{j=1}^{D} Σ_{i : j ∈ P^i_+} ℓ(s_i(ã^i · x_j + b̃_i), y_j) subject to the constraints
ã^i · x_j + b̃_i ≤ 0 ∀j ∈ P^i_− and ã^i · x_j + b̃_i ≥ 0 ∀j ∈ P^i_+, which are imposed for all
i = 1, . . . , w; this is a convex program.
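Below is a minimal, runnable sketch of this search for the special case of the squared loss. To keep it short, it enumerates all 2^D sign patterns per hidden unit rather than only the at most D^n hyperplane-induced partitions counted in the proof of Theorem 4.1 (every hyperplane partition is among them, so the global optimum is still found, just more slowly), and it scores each guess by the squared loss of the aggregated prediction f(x_j) = Σ_{i : j ∈ P^i_+} s_i(ã^i · x_j + b̃_i), matching the objective in (4.1) on the guessed partition. The function names and the use of scipy's SLSQP solver are our choices, not part of the paper.

    import itertools
    import numpy as np
    from scipy.optimize import minimize

    def erm_two_layer_relu(X, y, w):
        """Globally minimize (1/D) * sum_j (f(x_j) - y_j)^2 over width-w
        2-layer ReLU nets f(x) = sum_i s_i * max(0, a_i . x + b_i)."""
        D, n = X.shape
        best = (np.inf, None)
        patterns = list(itertools.product([False, True], repeat=D))   # P_+ guesses
        for s in itertools.product([-1.0, 1.0], repeat=w):            # top-layer signs
            for Pplus in itertools.product(patterns, repeat=w):       # one guess per unit
                active = np.array(Pplus).T                            # D x w boolean mask

                def loss(theta):
                    A = theta[: w * n].reshape(w, n)
                    b = theta[w * n:]
                    pre = X @ A.T + b                                 # D x w pre-activations
                    f = (active * pre * np.array(s)).sum(axis=1)      # linear on this guess
                    return np.mean((f - y) ** 2)

                # Constraints forcing (a_i, b_i) to actually induce the guessed partition.
                cons = []
                for i in range(w):
                    for j in range(D):
                        sign = 1.0 if active[j, i] else -1.0
                        cons.append({"type": "ineq",
                                     "fun": lambda t, i=i, j=j, sign=sign:
                                         sign * (X[j] @ t[i * n:(i + 1) * n] + t[w * n + i])})
                res = minimize(loss, np.zeros(w * n + w), constraints=cons, method="SLSQP")
                if res.success and res.fun < best[0]:
                    best = (res.fun, (res.x[: w * n].reshape(w, n), res.x[w * n:], np.array(s)))
        return best

    # Example usage on a toy dataset (w = 2 hidden units).
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y = np.array([0.0, 1.0, 1.0, 0.0])
    val, (A, b, s) = erm_two_layer_relu(X, y, w=2)

Even for D = 4, n = 2, w = 2 this already solves about a thousand small constrained least-squares problems, which is why the exponential dependence on n and w in Theorem 4.1 matters.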
Algorithm 1 implements the empirical risk minimization (ERM) rule for training a ReLU DNN with
one hidden layer. To the best of our knowledge, there is no other known algorithm that solves
the ERM problem to global optimality. We note that, due to known hardness results, exponential
dependence on the input dimension is unavoidable Blum & Rivest (1992); Shalev-Shwartz & Ben-David
(2014); Algorithm 1 runs in time polynomial in the number of data points. To the best of
our knowledge, there is no hardness result known which rules out empirical risk minimization of
deep nets in time polynomial in circuit size or data size. Thus our training result is a step towards
resolving this gap in the complexity literature.
A related result for improperly learning ReLUs has been recently obtained by Goel et al. (Goel et al.,
2016). In contrast, our algorithm returns a ReLU DNN from the class being learned. Another
difference is that their result considers the notion of reliable learning as opposed to the empirical
risk minimization objective considered in (4.1).

5 DISCUSSION

The running time of the algorithm that we give in this work to find the exact global minimum of a
two layer ReLU DNN is exponential in the input dimension n and the number of hidden nodes w.
The exponential dependence on n cannot be removed unless P = NP; see Shalev-Shwartz &
Ben-David (2014); Blum & Rivest (1992); DasGupta et al. (1995). However, we are not aware of
any complexity results which would rule out the possibility of an algorithm which trains to global
optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming
that the input dimension is a fixed constant. Resolving this dependence on network size would be
another step towards clarifying the theoretical complexity of training ReLU DNNs and is, in our
opinion, a good open question for future research. Perhaps an even better breakthrough would be
to get optimal training algorithms for DNNs with two or more hidden layers; this seems like
a substantially harder nut to crack. It would also be a significant breakthrough to get gap results
between consecutive constant depths or between logarithmic and constant depths.

ACKNOWLEDGMENTS

We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version
of the paper, which affected the complexity results for the number of linear regions in our construc-
tions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and
Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has
been immensely influenced by the perspectives gained during those extremely helpful discussions.
Amitabh Basu gratefully acknowledges support from the NSF grant CMMI1452820. Raman Arora
was supported in part by NSF BIGDATA grant IIS-1546482.

REFERENCES
Eric Allender. Complexity theory lecture notes. https://www.cs.rutgers.edu/~allender/lecture.notes/, 1998.
Martin Anthony and Peter L. Bartlett. Neural network learning: Theoretical foundations. Cam-
bridge University Press, 1999.
Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge Uni-
versity Press, 2009.
Avrim L. Blum and Ronald L. Rivest. Training a 3-node neural network is np-complete. Neural
Networks, 5(1):117–127, 1992.
George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control,
signals and systems, 2(4):303–314, 1989.
George E. Dahl, Tara N. Sainath, and Geoffrey E. Hinton. Improving deep neural networks for lvcsr
using rectified linear units and dropout. In 2013 IEEE International Conference on Acoustics,
Speech and Signal Processing, pp. 8609–8613. IEEE, 2013.
Bhaskar DasGupta, Hava T. Siegelmann, and Eduardo Sontag. On the complexity of training neural
networks with continuous activation functions. IEEE Transactions on Neural Networks, 6(6):
1490–1504, 1995.
Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In 29th
Annual Conference on Learning Theory, pp. 907–940, 2016.
Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the relu in polyno-
mial time. arXiv preprint arXiv:1611.10258, 2016.
Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout
networks. arXiv preprint arXiv:1302.4389, 2013.
Benjamin D. Haeffele and René Vidal. Global optimality in tensor factorization, deep learning, and
beyond. arXiv preprint arXiv:1506.07540, 2015.
Boris Hanin. Universal function approximation by deep neural nets with bounded width and relu
activations. arXiv preprint arXiv:1708.02691, 2017.
Johan Hastad. Almost optimal lower bounds for small depth circuits. In Proceedings of the eigh-
teenth annual ACM symposium on Theory of computing, pp. 6–20. ACM, 1986.
Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly,
Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks
for acoustic modeling in speech recognition: The shared views of four research groups. IEEE
Signal Processing Magazine, 29(6):82–97, 2012.
Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief
nets. Neural computation, 18(7):1527–1554, 2006.
Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4
(2):251–257, 1991.


Stasys Jukna. Boolean function complexity: advances and frontiers, volume 27. Springer Science
& Business Media, 2012.
Daniel M. Kane and Ryan Williams. Super-linear gate and super-quadratic wire lower bounds for
depth-two and depth-three threshold circuits. arXiv preprint arXiv:1511.07860, 2015.
Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110,
2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convo-
lutional neural networks. In Advances in neural information processing systems, pp. 1097–1105,
2012.
Quoc V. Le. Building high-level features using large scale unsupervised learning. In 2013 IEEE
international conference on acoustics, speech and signal processing, pp. 8595–8598. IEEE, 2013.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444,
2015.
Shiyu Liang and R Srikant. Why deep neural networks for function approximation? 2016.
Jiri Matousek. Lectures on discrete geometry, volume 212. Springer Science & Business Media,
2002.
Guido F. Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear
regions of deep neural networks. In Advances in neural information processing systems, pp.
2924–2932, 2014.
Razvan Pascanu, Guido Montufar, and Yoshua Bengio. On the number of response regions of deep
feed forward networks with piece-wise linear activations. arXiv preprint arXiv:1312.6098, 2013.
Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the ex-
pressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016.
Alexander A. Razborov. Lower bounds on the size of bounded depth circuits over a complete basis
with logical addition. Mathematical Notes, 41(4):333–338, 1987.
Benjamin Rossman, Rocco A. Servedio, and Li-Yang Tan. An average-case depth hierarchy theo-
rem for boolean circuits. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual
Symposium on, pp. 1030–1048. IEEE, 2015.
H.L. Royden and P.M. Fitzpatrick. Real Analysis. Prentice Hall, 2010.
Itay Safran and Ohad Shamir. Depth-width tradeoffs in approximating natural functions with neural
networks. In International Conference on Machine Learning, pp. 2979–2987, 2017.
Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep boltzmann machines. In International Confer-
ence on Artificial Intelligence and Statistics (AISTATS), volume 1, pp. 3, 2009.
R. Saptharishi. A survey of lower bounds in arithmetic circuit complexity, 2014.
Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. Over-
feat: Integrated recognition, localization and detection using convolutional networks. In Interna-
tional Conference on Learning Representations (ICLR 2014). arXiv preprint arXiv:1312.6229,
2014.
Thiago Serra, Christian Tjandraatmadja, and Srikumar Ramalingam. Bounding and counting linear
regions of deep neural networks. arXiv preprint arXiv:1711.02114, 2017.
Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algo-
rithms. Cambridge university press, 2014.
Ohad Shamir. Distribution-specific hardness of learning neural networks. arXiv preprint
arXiv:1609.01037, 2016.
Amir Shpilka and Amir Yehudayoff. Arithmetic circuits: A survey of recent results and open ques-
tions. Foundations and Trends in Theoretical Computer Science, 5(3–4):207–388, 2010.
Roman Smolensky. Algebraic methods in the theory of lower bounds for boolean circuit complexity.
In Proceedings of the nineteenth annual ACM symposium on Theory of computing, pp. 77–82.
ACM, 1987.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning
Research, 15(1):1929–1958, 2014.


Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks.
In Advances in neural information processing systems, pp. 3104–3112, 2014.
Matus Telgarsky. Representation benefits of deep feedforward networks. arXiv preprint
arXiv:1509.08101, 2015.
Matus Telgarsky. Benefits of depth in neural networks. In 29th Annual Conference on Learning
Theory, pp. 1517–1539, 2016.
Shuning Wang. General constructive representations for continuous piecewise-linear functions.
IEEE Transactions on Circuits and Systems I: Regular Papers, 51(9):1889–1896, 2004.
Shuning Wang and Xusheng Sun. Generalization of hinging hyperplanes. IEEE Transactions on
Information Theory, 51(12):4425–4431, 2005.
Dmitry Yarotsky. Error bounds for approximations with deep relu networks. arXiv preprint
arXiv:1610.01145, 2016.
Günter M. Ziegler. Lectures on polytopes, volume 152. Springer Science & Business Media, 1995.

A EXPRESSING PIECEWISE LINEAR FUNCTIONS USING ReLU DNNS


Proof of Theorem 2.2. Any continuous piecewise linear function R → R which has m pieces can be
specified by three pieces of information: (1) s_L, the slope of the leftmost piece, (2) the coordinates
of the non-differentiable points specified by an (m − 1)-tuple {(a_i, b_i)}_{i=1}^{m−1} (indexed from left to
right), and (3) s_R, the slope of the rightmost piece. A tuple (s_L, s_R, (a_1, b_1), . . . , (a_{m−1}, b_{m−1}))
uniquely specifies an m-piece piecewise linear function from R → R and vice versa. Given such a tuple,
we construct a 2-layer DNN which computes the same piecewise linear function.
One notes that for any a, r ∈ R, the function

\[ f(x) = \begin{cases} 0 & x \le a \\ r(x-a) & x > a \end{cases} \tag{A.1} \]

is equal to sgn(r) max{|r|(x − a), 0}, which can be implemented by a 2-layer ReLU DNN with size
1. Similarly, any function of the form

\[ g(x) = \begin{cases} t(x-a) & x \le a \\ 0 & x > a \end{cases} \tag{A.2} \]

is equal to − sgn(t) max{−|t|(x − a), 0}, which can be implemented by a 2-layer ReLU DNN
with size 1. The parameters r, t will be called the slopes of the function, and a will be called
the breakpoint of the function. If we can write the given piecewise linear function as a sum of m
functions of the form (A.1) and (A.2), then by Lemma D.2 we would be done. It turns out that
such a decomposition of any p piece PWL function h : R → R as a sum of p flaps can always
be arranged, where the breakpoints of the p flaps are all contained in the p − 1 breakpoints of
h. First, observe that adding a constant to a function does not change the complexity of the ReLU
DNN expressing it, since this corresponds to a bias on the output node. Thus, we will assume
that the value of h at the last breakpoint a_{m−1} is b_{m−1} = 0. We now use a single function f of
the form (A.1) with slope r and breakpoint a = a_{m−1}, and m − 1 functions g_1, . . . , g_{m−1} of the
form (A.2) with slopes t_1, . . . , t_{m−1} and breakpoints a_1, . . . , a_{m−1}, respectively. Thus, we wish
to express h = f + g_1 + . . . + g_{m−1}. Such a decomposition of h would be valid if we can find
values for r, t_1, . . . , t_{m−1} such that (1) the slope of the above sum is s_L for x < a_1, (2) the
slope of the above sum is s_R for x > a_{m−1}, and (3) for each i ∈ {1, 2, 3, . . . , m − 1} we have
b_i = f(a_i) + g_1(a_i) + . . . + g_{m−1}(a_i).
The above corresponds to asking for the existence of a solution to the following set of simultaneous
linear equations in r, t_1, . . . , t_{m−1}:

\[ s_R = r, \qquad s_L = t_1 + t_2 + \dots + t_{m-1}, \qquad b_i = \sum_{j=i+1}^{m-1} t_j (a_{j-1} - a_j) \quad \text{for all } i = 1, \dots, m-2. \]

It is easy to verify that the above set of simultaneous linear equations has a unique solution. Indeed,
r must equal s_R, and then one can solve for t_1, . . . , t_{m−1} starting from the last equation,
b_{m−2} = t_{m−1}(a_{m−2} − a_{m−1}), and then back substitute to compute t_{m−2}, t_{m−3}, . . . , t_1. The lower bound
of p − 1 on the size for any 2-layer ReLU DNN that expresses a p piece function follows from
Lemma D.6.

One can do better in terms of size when the rightmost piece of the given function is flat, i.e., s_R = 0.
In this case r = 0, which means that f = 0; thus, the decomposition of h above is of size p − 1.
A similar construction can be done when s_L = 0. This gives the following statement, which will be
useful for constructing our forthcoming hard functions.
Corollary A.1. If the rightmost or leftmost piece of an R → R piecewise linear function has 0 slope,
then we can compute such a p piece function using a 2-layer DNN with size p − 1.

Proof of Theorem 2.3. Since any piecewise linear function R^n → R is representable by a ReLU
DNN by Theorem 2.1, the proof simply follows from the fact that the family of continuous piecewise
linear functions is dense in any L^p(R^n) space, for 1 ≤ p ≤ ∞.

B BENEFITS OF DEPTH

B.1 CONSTRUCTING A CONTINUUM OF HARD FUNCTIONS FOR R → R ReLU DNNS AT EVERY DEPTH AND EVERY WIDTH

Lemma B.1. For any M > 0, p ∈ N, k ∈ N and a^1, . . . , a^k ∈ ∆^p_M, if we compose the functions
h_{a^1}, h_{a^2}, . . . , h_{a^k}, the resulting function is a piecewise linear function with at most (p + 1)^k + 2
pieces, i.e.,

H_{a^1,...,a^k} := h_{a^k} ◦ h_{a^{k−1}} ◦ . . . ◦ h_{a^1}

is piecewise linear with at most (p + 1)^k + 2 pieces, with (p + 1)^k of these pieces in the range [0, M]
(see Figure 2). Moreover, in each piece in the range [0, M], the function is affine with minimum
value 0 and maximum value M.

Proof. Simple induction on k.

Proof of Theorem 3.2. Given k ≥ 1 and w ≥ 2, choose any point

\[ (a^1, \dots, a^k) \in \bigcup_{M>0} \underbrace{\Delta^{w-1}_M \times \Delta^{w-1}_M \times \cdots \times \Delta^{w-1}_M}_{k \text{ times}}. \]

By Definition 8, each h_{a^i}, i = 1, . . . , k is a piecewise linear function with w + 1 pieces and the
leftmost piece having slope 0. Thus, by Corollary A.1, each h_{a^i}, i = 1, . . . , k can be represented by
a 2-layer ReLU DNN with size w. Using Lemma D.1, H_{a^1,...,a^k} can be represented by a k + 1 layer
DNN with size wk; in fact, each hidden layer has exactly w nodes.

Proof of Theorem 3.1. Follows from Theorem 3.2 and Lemma D.6.

Proof of Theorem 3.5. Given k ≥ 1 and w ≥ 2, define q := w^k and

\[ s_q := \underbrace{h_a \circ h_a \circ \dots \circ h_a}_{k \text{ times}}, \quad \text{where } a = \left(\tfrac{1}{w}, \tfrac{2}{w}, \dots, \tfrac{w-1}{w}\right) \in \Delta^{w-1}_1. \]

Thus, s_q is representable by a ReLU DNN of width w + 1 and depth k + 1 by Lemma D.1. In what
follows, we want to give a lower bound on the ℓ_1 distance of s_q from any continuous p-piecewise
linear comparator g_p : R → R. The function s_q contains ⌊q/2⌋ triangles of width 2/q and unit height.
A p-piecewise linear function has p − 1 breakpoints in the interval [0, 1]. So in at least
⌊w^k/2⌋ − (p − 1) triangles, g_p has to be affine.

Figure 2: Top: h_{a^1} with a^1 ∈ ∆^2_1 with 3 pieces in the range [0, 1]. Middle: h_{a^2} with a^2 ∈ ∆^1_1 with
2 pieces in the range [0, 1]. Bottom: H_{a^1,a^2} = h_{a^2} ◦ h_{a^1} with 2 · 3 = 6 pieces in the range [0, 1].
The dotted line in the bottom panel corresponds to the function in the top panel. It shows that for
every piece of the dotted graph, there is a full copy of the graph in the middle panel.

In the following we demonstrate that inside any triangle of s_q, any affine function will incur an ℓ_1
error of at least 1/(2w^k):

\[
\begin{aligned}
\int_{x=\frac{2i}{w^k}}^{\frac{2i+2}{w^k}} |s_q(x) - g_p(x)|\, dx
&= \int_{x=0}^{\frac{2}{w^k}} \left| s_q(x) - \left( y_1 + \frac{y_2 - y_1}{\frac{2}{w^k} - 0}\,(x - 0) \right) \right| dx \\
&= \int_{x=0}^{\frac{1}{w^k}} \left| x w^k - y_1 - \frac{w^k x}{2}(y_2 - y_1) \right| dx
 + \int_{x=\frac{1}{w^k}}^{\frac{2}{w^k}} \left| 2 - x w^k - y_1 - \frac{w^k x}{2}(y_2 - y_1) \right| dx \\
&= \frac{1}{w^k} \int_{z=0}^{1} \left| z - y_1 - \frac{z}{2}(y_2 - y_1) \right| dz
 + \frac{1}{w^k} \int_{z=1}^{2} \left| 2 - z - y_1 - \frac{z}{2}(y_2 - y_1) \right| dz \\
&= \frac{1}{w^k} \left( -3 + y_1 + \frac{2y_1^2}{2 + y_1 - y_2} + y_2 + \frac{2(-2 + y_1)^2}{2 - y_1 + y_2} \right)
\end{aligned}
\]

The above integral attains its minimum of 1/(2w^k) at y_1 = y_2 = ½. Putting everything together,

\[ \| s_{w^k} - g_p \|_1 \;\ge\; \left( \left\lfloor \frac{w^k}{2} \right\rfloor - (p-1) \right) \cdot \frac{1}{2w^k} \;\ge\; \frac{w^k - 1 - 2(p-1)}{4w^k} \;=\; \frac{1}{4} - \frac{2p-1}{4w^k}. \]

Thus, for any δ > 0,

\[ p \le \frac{w^k - 4w^k\delta + 1}{2} \;\Longrightarrow\; 2p - 1 \le \left(\tfrac{1}{4} - \delta\right) 4 w^k \;\Longrightarrow\; \frac{1}{4} - \frac{2p-1}{4w^k} \ge \delta \;\Longrightarrow\; \| s_{w^k} - g_p \|_1 \ge \delta. \]
The result now follows from Lemma D.6.

B.2 A CONTINUUM OF HARD FUNCTIONS FOR R^n → R FOR n ≥ 2

Proof of Lemma 3.7. By Theorem 3.6 part 3, γ_{Z(b^1,...,b^m)}(r) = |⟨r, b^1⟩| + . . . + |⟨r, b^m⟩|. It
suffices to observe that

\[ |\langle r, b^1\rangle| + \dots + |\langle r, b^m\rangle| = \max\{\langle r, b^1\rangle, -\langle r, b^1\rangle\} + \dots + \max\{\langle r, b^m\rangle, -\langle r, b^m\rangle\}. \]

Proof of Proposition 3.8. The fact that ZONOTOPE^n_{k,w,m}[a^1, . . . , a^k, b^1, . . . , b^m] can be represented
by a (k + 2)-layer ReLU DNN with size 2m + wk follows from Lemmas 3.7 and D.1. The
number of pieces follows from the fact that γ_{Z(b^1,...,b^m)} has \( \sum_{i=0}^{n-1}\binom{m-1}{i} \) distinct linear pieces by
parts 1 and 2 of Theorem 3.6, and H_{a^1,...,a^k} has w^k pieces by Lemma B.1.

Proof of Theorem 3.9. Follows from Proposition 3.8.


C EXACT EMPIRICAL RISK MINIMIZATION


Proof of Theorem 4.1. Let ℓ : R × R → R be any convex loss function, and let (x_1, y_1), . . . , (x_D, y_D) ∈
R^n × R be the given D data points. As stated in (4.1), the problem requires us to find an affine
transformation T_1 : R^n → R^w and a linear transformation T_2 : R^w → R, so as to minimize the
empirical loss as stated in (4.1). Note that T_1 is given by a matrix A ∈ R^{w×n} and a vector b ∈ R^w
so that T_1(x) = Ax + b for all x ∈ R^n. Similarly, T_2 can be represented by a vector a′ ∈ R^w such
that T_2(y) = a′ · y for all y ∈ R^w. If we denote the i-th row of the matrix A by a^i, and write b_i, a′_i
to denote the i-th coordinates of the vectors b, a′ respectively, we can write the function represented
by this network as

\[ f(x) = \sum_{i=1}^{w} a'_i \max\{0, a^i \cdot x + b_i\} = \sum_{i=1}^{w} \mathrm{sgn}(a'_i) \max\{0, (|a'_i| a^i) \cdot x + |a'_i| b_i\}. \]

In other words, the family of functions over which we are searching is of the form

\[ f(x) = \sum_{i=1}^{w} s_i \max\{0, \tilde{a}^i \cdot x + \tilde{b}_i\}, \tag{C.1} \]

where ã^i ∈ R^n, b̃_i ∈ R and s_i ∈ {−1, +1} for all i = 1, . . . , w. We now make the following
observation. For a given data point (x_j, y_j), if ã^i · x_j + b̃_i ≤ 0, then the i-th term of (C.1) does
not contribute to the loss function for this data point (x_j, y_j). Thus, for every data point (x_j, y_j),
there exists a set S_j ⊆ {1, . . . , w} such that f(x_j) = Σ_{i∈S_j} s_i(ã^i · x_j + b̃_i). In particular, if we are
given the set S_j for (x_j, y_j), then the expression on the right hand side of (C.1) reduces to a linear
function of ã^i, b̃_i. For any fixed i ∈ {1, . . . , w}, these sets S_j induce a partition of the data set into
two parts. In particular, we define P^i_+ := {j : i ∈ S_j} and P^i_− := {1, . . . , D} \ P^i_+. Observe now
that this partition is also induced by the hyperplane given by ã^i, b̃_i: P^i_+ = {j : ã^i · x_j + b̃_i > 0}
and P^i_− = {j : ã^i · x_j + b̃_i ≤ 0}. Our strategy will be to guess the partitions P^i_+, P^i_− for each
i = 1, . . . , w, and then do linear regression with the constraint that the regression's decision variables
ã^i, b̃_i induce the guessed partition.
More formally, the algorithm does the following. For each i = 1, . . . , w, the algorithm guesses a
partition of the data set (x_j, y_j), j = 1, . . . , D by a hyperplane. Let us label the partitions as follows:
(P^i_+, P^i_−), i = 1, . . . , w. So, for each i = 1, . . . , w, P^i_+ ∪ P^i_− = {1, . . . , D}, P^i_+ and P^i_− are
disjoint, and there exists a vector c ∈ R^n and a real number δ such that P^i_− = {j : c · x_j + δ ≤ 0}
and P^i_+ = {j : c · x_j + δ > 0}. Further, the algorithm selects a vector s in {+1, −1}^w.
For a fixed selection of partitions (P^i_+, P^i_−), i = 1, . . . , w and a vector s in {+1, −1}^w, the algorithm
solves the following convex optimization problem with decision variables ã^i ∈ R^n, b̃_i ∈ R for
i = 1, . . . , w (thus, we have a total of (n + 1) · w decision variables). The feasible region of the
optimization is given by the constraints

\[ \tilde{a}^i \cdot x_j + \tilde{b}_i \le 0 \ \ \forall j \in P^i_-, \qquad \tilde{a}^i \cdot x_j + \tilde{b}_i \ge 0 \ \ \forall j \in P^i_+, \tag{C.2} \]

which are imposed for all i = 1, . . . , w. Thus, we have a total of D · w constraints. Subject to
these constraints we minimize the objective Σ_{j=1}^{D} Σ_{i : j∈P^i_+} ℓ(s_i(ã^i · x_j + b̃_i), y_j). Assuming the
loss function ℓ is a convex function in the first argument, the above objective is a convex function.
Thus, we have to minimize a convex objective subject to the linear inequality constraints from (C.2).
We finally have to count how many possible partitions (P^i_+, P^i_−) and vectors s the algorithm has
to search through. It is well-known Matousek (2002) that the total number of possible hyperplane
partitions of a set of size D in R^n is at most \( 2\binom{D}{n} \le D^n \) whenever n ≥ 2. Thus, with a guess for each
i = 1, . . . , w, we have a total of at most D^{nw} partitions. There are 2^w vectors s in {−1, +1}^w. This
gives us a total of 2^w D^{nw} guesses for the partitions (P^i_+, P^i_−) and vectors s. For each such guess,
we have a convex optimization problem with (n + 1) · w decision variables and D · w constraints,
which can be solved in time poly(D, n, w). Putting everything together, we have the running time
claimed in the statement.


The above argument holds only for n ≥ 2, since we used the inequality \( 2\binom{D}{n} \le D^n \), which
only holds for n ≥ 2. For n = 1, a similar algorithm can be designed, but one which uses the
characterization achieved in Theorem 2.2. Let ℓ : R × R → R be any convex loss function, and let
(x_1, y_1), . . . , (x_D, y_D) ∈ R^2 be the given D data points. Using Theorem 2.2, to solve problem (4.1)
it suffices to find an R → R piecewise linear function f with w pieces that minimizes the total loss.
In other words, the optimization problem (4.1) is equivalent to the problem

\[ \min \left\{ \sum_{i=1}^{D} \ell(f(x_i), y_i) \;:\; f \text{ is piecewise linear with } w \text{ pieces} \right\}. \tag{C.3} \]

We now use the observation that fitting piecewise linear functions to minimize loss is just a step
away from linear regression, which is a special case where the function is constrained to have exactly
one affine linear piece. Our algorithm will first guess the optimal partition of the data points such
that all points in the same class of the partition correspond to the same affine piece of f, and then
do linear regression in each class of the partition. Alternatively, one can think of this as guessing the
interval (x_i, x_{i+1}) of data points where the w − 1 breakpoints of the piecewise linear function will
lie, and then doing linear regression between the breakpoints.
More formally, we parametrize piecewise linear functions with w pieces by the w slope-intercept
values (a_1, b_1), (a_2, b_2), . . . , (a_w, b_w) of the w different pieces. This means that between breakpoints
j and j + 1, 1 ≤ j ≤ w − 2, the function is given by f(x) = a_{j+1} x + b_{j+1}, and the first and
last pieces are a_1 x + b_1 and a_w x + b_w, respectively.
Define I to be the set of all (w − 1)-tuples (i_1, . . . , i_{w−1}) of natural numbers such that 1 ≤ i_1 ≤
. . . ≤ i_{w−1} ≤ D. Given a fixed tuple I = (i_1, . . . , i_{w−1}) ∈ I, we wish to search through all
piecewise linear functions whose breakpoints, in order, appear in the intervals (x_{i_1}, x_{i_1+1}), (x_{i_2}, x_{i_2+1}),
. . . , (x_{i_{w−1}}, x_{i_{w−1}+1}). Define also S = {−1, 1}^{w−1}. Any S ∈ S will have the following
interpretation: if S_j = 1 then a_j ≤ a_{j+1}, and if S_j = −1 then a_j ≥ a_{j+1}. Now for every I ∈ I
and S ∈ S, requiring a piecewise linear function that respects the conditions imposed by I and
S is easily seen to be equivalent to imposing the following linear inequalities on the parameters
(a_1, b_1), (a_2, b_2), . . . , (a_w, b_w):

\[ \begin{aligned} S_j\big(b_{j+1} - b_j - (a_j - a_{j+1}) x_{i_j}\big) &\ge 0 \\ S_j\big(b_{j+1} - b_j - (a_j - a_{j+1}) x_{i_j+1}\big) &\le 0 \\ S_j\big(a_{j+1} - a_j\big) &\ge 0 \end{aligned} \tag{C.4} \]

Let the set of piecewise linear functions whose breakpoints satisfy the above be denoted by PWL^1_{I,S}
for I ∈ I, S ∈ S.
Given a particular I ∈ I, we define

\[ \begin{aligned} \mathcal{D}_1 &:= \{x_i : i \le i_1\}, \\ \mathcal{D}_j &:= \{x_i : i_{j-1} < i \le i_j\}, \quad j = 2, \dots, w-1, \\ \mathcal{D}_w &:= \{x_i : i > i_{w-1}\}. \end{aligned} \]

Observe that

\[ \min\Big\{ \sum_{i=1}^{D} \ell(f(x_i) - y_i) : f \in \mathrm{PWL}^1_{I,S} \Big\} = \min\Big\{ \sum_{j=1}^{w} \sum_{i \in \mathcal{D}_j} \ell(a_j \cdot x_i + b_j - y_i) : (a_j, b_j) \text{ satisfy (C.4)} \Big\} \tag{C.5} \]

The right hand side of the above equation is the problem of minimizing a convex objective subject to
linear constraints. Now, to solve (C.3), we need to simply solve the problem (C.5) for all I ∈ I, S ∈
S and pick the minimum. Since \( |\mathcal{I}| = \binom{D}{w} = O(D^w) \) and |S| = 2^{w−1}, we need to solve O(2^w · D^w)
convex optimization problems, each taking time O(poly(D)). Therefore, the total running time is
O((2D)^w poly(D)).


D AUXILIARY LEMMAS

Now we will collect some straightforward observations that will be used often. The following oper-
ations preserve the property of being representable by a ReLU DNN.
Lemma D.1. [Function Composition] If f1 : Rd → Rm is represented by a d, m ReLU DNN with
depth k1 + 1 and size s1 , and f2 : Rm → Rn is represented by an m, n ReLU DNN with depth
k2 + 1 and size s2 , then f2 ◦ f1 can be represented by a d, n ReLU DNN with depth k1 + k2 + 1 and
size s1 + s2 .

Proof. Follows from (1.1) and the fact that a composition of affine transformations is another affine
transformation.

Lemma D.2. [Function Addition] If f1 : Rn → Rm is represented by a n, m ReLU DNN with


depth k + 1 and size s1 , and f2 : Rn → Rm is represented by a n, m ReLU DNN with depth k + 1
and size s2 , then f1 + f2 can be represented by a n, m ReLU DNN with depth k + 1 and size s1 + s2 .

Proof. We simply put the two ReLU DNNs in parallel and combine the appropriate coordinates of
the outputs.

Lemma D.3. [Taking maximums/minimums] Let f_1, . . . , f_m : R^n → R be functions that can each
be represented by R^n → R ReLU DNNs with depths k_i + 1 and sizes s_i, i = 1, . . . , m. Then the
function f : R^n → R defined as f(x) := max{f_1(x), . . . , f_m(x)} can be represented by a ReLU
DNN of depth at most max{k_1, . . . , k_m} + ⌈log(m)⌉ + 1 and size at most s_1 + . . . + s_m + 4(2m − 1).
Similarly, the function g(x) := min{f_1(x), . . . , f_m(x)} can be represented by a ReLU DNN of
depth at most max{k_1, . . . , k_m} + ⌈log(m)⌉ + 1 and size at most s_1 + . . . + s_m + 4(2m − 1).

Proof. We prove this by induction on m. The base case m = 1 is trivial. For m ≥ 2, consider
g_1 := max{f_1, . . . , f_{⌊m/2⌋}} and g_2 := max{f_{⌊m/2⌋+1}, . . . , f_m}. By the induction hypothesis (since
⌊m/2⌋, ⌈m/2⌉ < m when m ≥ 2), g_1 and g_2 can be represented by ReLU DNNs of depths at most
max{k_1, . . . , k_{⌊m/2⌋}} + ⌈log(⌊m/2⌋)⌉ + 1 and max{k_{⌊m/2⌋+1}, . . . , k_m} + ⌈log(⌈m/2⌉)⌉ + 1 respectively,
and sizes at most s_1 + . . . + s_{⌊m/2⌋} + 4(2⌊m/2⌋ − 1) and s_{⌊m/2⌋+1} + . . . + s_m + 4(2⌈m/2⌉ − 1),
respectively.
Therefore, the function G : R^n → R^2 given by G(x) = (g_1(x), g_2(x)) can be implemented by a
ReLU DNN with depth at most max{k_1, . . . , k_m} + ⌈log(⌈m/2⌉)⌉ + 1 and size at most s_1 + . . . +
s_m + 4(2m − 2).
We now show how to represent the function T : R^2 → R defined as T(x, y) = max{x, y} =
(x + y)/2 + |x − y|/2 by a 2-layer ReLU DNN with size 4 – see Figure 3. The result now follows from the
fact that f = T ◦ G and Lemma D.1.

Figure 3: A 2-layer ReLU DNN computing max{x_1, x_2} = (x_1 + x_2)/2 + |x_1 − x_2|/2.
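One way to realize the gadget in Figure 3 uses four hidden ReLU units combined with output weights (½, −½, ½, ½), matching the identity in the caption; a small numpy check (ours):

    import numpy as np

    def relu(z):
        return np.maximum(0.0, z)

    def max_gadget(x1, x2):
        # Hidden layer: four ReLU units with inputs (x1+x2), -(x1+x2), (x1-x2), (x2-x1);
        # output layer combines them with weights 1/2, -1/2, 1/2, 1/2.
        h = np.array([relu(x1 + x2), relu(-x1 - x2), relu(x1 - x2), relu(x2 - x1)])
        return np.dot(np.array([0.5, -0.5, 0.5, 0.5]), h)

    for a, b in [(3.0, -1.0), (-2.0, -5.0), (0.0, 0.0), (1.5, 2.5)]:
        assert np.isclose(max_gadget(a, b), max(a, b))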
Lemma D.4. Any affine transformation T : Rn → Rm is representable by a 2-layer ReLU DNN of
size 2m.


Proof. Simply use the fact that T = (I ◦ σ ◦ T) + (−I ◦ σ ◦ (−T)), and the right hand side can be
represented by a 2-layer ReLU DNN of size 2m using Lemma D.2.
Lemma D.5. Let f : R → R be a function represented by an R → R ReLU DNN with depth
k + 1 and widths w_1, . . . , w_k of the k hidden layers. Then f is a PWL function with at most
2^{k−1} · (w_1 + 1) · w_2 · . . . · w_k pieces.


Figure 4: The number of pieces increasing after activation. If the blue function is f , then the red
function g = max{0, f + b} has at most twice the number of pieces as f for any bias b ∈ R.

Proof. We prove this by induction on k. The base case is k = 1, i.e., we have a 2-layer ReLU DNN.
Since every activation node can produce at most one breakpoint in the piecewise linear function, we
can get at most w_1 breakpoints, i.e., w_1 + 1 pieces.
Now for the induction step, assume that for some k ≥ 1, any R → R ReLU DNN with depth k + 1
and widths w_1, . . . , w_k of the k hidden layers produces at most 2^{k−1} · (w_1 + 1) · w_2 · . . . · w_k pieces.
Consider any R → R ReLU DNN with depth k + 2 and widths w_1, . . . , w_{k+1} of the k + 1 hidden
layers. Observe that the input to any node in the last layer is the output of an R → R ReLU DNN
with depth k + 1 and widths w_1, . . . , w_k. By the induction hypothesis, the input to this node in the
last layer is a piecewise linear function f with at most 2^{k−1} · (w_1 + 1) · w_2 · . . . · w_k pieces. When we
apply the activation, the new function g(x) = max{0, f(x)}, which is the output of this node, may
have at most twice the number of pieces as f, because each original piece may be intersected by the
x-axis; see Figure 4. Thus, after going through the layer, we take an affine combination of w_{k+1}
functions, each with at most 2 · (2^{k−1} · (w_1 + 1) · w_2 · . . . · w_k) pieces. In all, we can therefore get at
most 2 · (2^{k−1} · (w_1 + 1) · w_2 · . . . · w_k) · w_{k+1} pieces, which is equal to 2^k · (w_1 + 1) · w_2 · . . . · w_k · w_{k+1},
and the induction step is completed.
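A simple empirical sanity check of Lemma D.5 (our sketch, not from the paper): count slope changes of a randomly-initialized R → R ReLU net on a dense grid, which can only undercount the true number of pieces, and compare against 2^{k−1}(w_1 + 1) w_2 · · · w_k.

    import numpy as np

    def count_pieces(widths, seed=0, grid=200001, span=20.0):
        """Approximately count the affine pieces of a random R -> R ReLU DNN with
        the given hidden widths by detecting slope changes on a dense grid."""
        rng = np.random.default_rng(seed)
        x = np.linspace(-span, span, grid)
        z = x.reshape(-1, 1)
        d_in = 1
        for w in widths:                                    # hidden layers
            W = rng.standard_normal((d_in, w))
            b = rng.standard_normal(w)
            z = np.maximum(0.0, z @ W + b)
            d_in = w
        y = (z @ rng.standard_normal((d_in, 1))).ravel()    # final linear layer
        slopes = np.diff(y) / np.diff(x)
        changes = ~np.isclose(slopes[1:], slopes[:-1], rtol=1e-5, atol=1e-8)
        # A breakpoint strictly inside a grid cell flags two consecutive changes,
        # so count maximal runs of True as a single breakpoint.
        breaks = int(np.sum(changes & ~np.r_[False, changes[:-1]]))
        return breaks + 1

    widths = [4, 3]                                         # k = 2 hidden layers
    bound = 2 ** (len(widths) - 1) * (widths[0] + 1) * int(np.prod(widths[1:]))
    assert count_pieces(widths) <= bound                    # <= 2^{k-1}(w_1+1)w_2...w_k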

Lemma D.5 has the following consequence about the depth and size tradeoffs for expressing functions
with a given number of pieces.
Lemma D.6. Let f : R → R be a piecewise linear function with p pieces. If f is represented by a
ReLU DNN with depth k + 1, then it must have size at least ½ k p^{1/k} − 1. Conversely, any piecewise
linear function f that is represented by a ReLU DNN of depth k + 1 and size at most s can have at
most (2s/k)^k pieces.

Proof. Let the widths of the k hidden layers be w_1, . . . , w_k. By Lemma D.5, we must have

\[ 2^{k-1} \cdot (w_1 + 1) \cdot w_2 \cdot \ldots \cdot w_k \ge p. \tag{D.1} \]

By the AM-GM inequality, minimizing the size w_1 + w_2 + . . . + w_k subject to (D.1) means setting
w_1 + 1 = w_2 = . . . = w_k. This implies that w_1 + 1 = w_2 = . . . = w_k ≥ ½ p^{1/k}. The first
statement follows. The second statement follows using the AM-GM inequality again, this time with
a restriction on w_1 + w_2 + . . . + w_k.
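For completeness, the AM-GM step can be written out as follows (our expansion of the argument above):

\[
w_1 + \dots + w_k \;\ge\; (w_1+1) + w_2 + \dots + w_k - 1 \;\ge\; k\big((w_1+1)w_2\cdots w_k\big)^{1/k} - 1 \;\ge\; k\left(\frac{p}{2^{k-1}}\right)^{1/k} - 1 \;\ge\; \frac{1}{2}\,k\,p^{1/k} - 1,
\]

since \( 2^{(k-1)/k} \le 2 \).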
