
The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)

TDSNN: From Deep Neural Networks to Deep Spike Neural Networks with Temporal-Coding
Lei Zhang,1,2,3 Shengyuan Zhou,1,2,3 Tian Zhi,1,3 Zidong Du,1,3 Yunji Chen*1,2
1 Institute of Computing Technology, Chinese Academy of Sciences
2 University of Chinese Academy of Sciences
3 Cambricon Tech. Ltd
{zhanglei03, zhousy, zhitian, duzidong, cyj}@ict.ac.cn

Abstract

Continuous-valued deep convolutional networks (DNNs) can be converted into accurate rate-coding based spike neural networks (SNNs). However, the substantial computational and energy costs caused by multiple spikes limit their use in mobile and embedded applications. Recent work has shown that the newly emerged temporal-coding based SNNs converted from DNNs can reduce the computational load effectively. In this paper, we propose a novel method to convert DNNs to temporal-coding SNNs, called TDSNN. Combined with the characteristics of the leaky integrate-and-fire (LIF) neural model, we put forward a new coding principle, Reverse Coding, and design a novel Ticking Neuron mechanism. According to our evaluation, our proposed method achieves a 42% reduction of total operations on average in large networks compared with DNNs, with no more than 0.5% accuracy loss. The evaluation shows that TDSNN may prove to be one of the key enablers to make the adoption of SNNs widespread.

Yunji Chen ([email protected]) is the corresponding author.
Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

SNNs are considered to be the third generation of neural networks and are more powerful at processing both spatial and temporal information (Maass 1997). However, it is difficult to train an SNN directly. A series of works have tuned backpropagation algorithms to fit SNN models (Bohte, Kok, and Han 2000; Lee, Delbruck, and Pfeiffer 2016). Although they achieve satisfactory results on simple tasks like MNIST (Lecun et al. 1998), these training methods can hardly be scaled to deep SNN models that solve more complex tasks like ImageNet (Russakovsky et al. 2014). Meanwhile, research on biological training methods meets the same problem. These works have put many existing brain mechanisms into the modeling of SNNs, including spike-timing-dependent plasticity (STDP), long-term potentiation or depression, short-term facilitation or depression, hetero-synaptic plasticity, etc. Recently, Zhang et al. (Zhang et al. 2018) proposed a novel multi-layer SNN model which applies biological mechanisms and achieves 98.52% on the MNIST dataset, but its performance is unknown when applied to larger datasets.

Unlike SNNs, deep neural networks (DNNs) have achieved state-of-the-art results on many complex tasks such as image recognition (Krizhevsky, Sutskever, and Hinton 2012; Krizhevsky 2009; Simonyan and Zisserman 2014; He et al. 2015), speech recognition (Abdel-Hamid et al. 2012; Sainath et al. 2013; Hinton et al. 2012), natural language processing (Kim 2014; Severyn and Moschitti 2015), and so on. But their heavy computation load pushes researchers to find more efficient ways to deploy them in mobile or embedded systems. This has inspired SNN researchers to ask whether a fully-trained DNN might be slightly tuned and then directly converted to an SNN without a complicated training procedure. Beginning with the work of (Perezcarrasco et al. 2013), where DNN units were translated into biologically inspired spiking units with leaks and refractory periods, continuous efforts have been made to realize this idea. After a series of successes in transferring deep networks like Lenet and VGG-16 (Cao, Chen, and Khosla 2015; Diehl et al. 2015; Rueckauer et al. 2017), rate-coding based SNNs can now achieve state-of-the-art performance with minor accuracy loss, even in the conversion of complicated layers like Max-Pool, BatchNorm and SoftMax.

However, the weak points of rate-coding based SNNs are obvious. Firstly, a rate-coding based SNN becomes more accurate as the simulation duration and the average firing rate of its neurons increase, but it may lose its potential performance advantage over DNNs as the firing rates increase (Rueckauer and Liu 2018). Secondly, part of the accuracy loss in the conversion occurs in the parameter determination procedure. In rate-coding based SNNs, the determination of important parameters such as the firing threshold seriously affects the final accuracy, while no deterministic methods have been proposed to eliminate this loss.

This poses the challenge of finding conversion methods based on another coding scheme: temporal coding. Neurons based on temporal coding make full use of spike times to transmit information, so the number of spikes is significantly reduced. Temporal coding has also been shown to be efficient for computation in biological brains (Van and Thorpe 2001). Temporal-coding based neurons apply a so-called time-to-first-spike (TTFS) scheme, and each neuron fires at most once during the forward inference (Thorpe, Delorme, and Rullen 2001).
Obviously, a temporal-coding based SNN is more computationally efficient than a rate-coding based SNN. Recently, Rueckauer and Liu (Rueckauer and Liu 2018) proposed a conversion method using sparse temporal coding, but it is only suitable for shallow neural network models.

In this paper, we put forward a novel conversion method called TDSNN, transferring DNNs to temporal-coding SNNs with only trivial accuracy loss. We also propose a new encoding scheme, Reverse Coding, along with a novel Ticking Neuron mechanism. We address three challenges during the conversion process: firstly, maintaining coding consistency in the entire network after mapping; secondly, eliminating the conversion error from DNNs to SNNs; thirdly, maintaining accuracy in an SNN with spike-once neurons. The details will be discussed later in the Reverse Coding part. According to our evaluation, we achieve a 42% reduction of total operations on average in large networks compared with DNNs, with no more than 0.5% accuracy loss. The experimental results show that our method is an efficient and high-performance conversion method for temporal-coding based SNNs.

Background

In this section, we introduce two common neural models: the integrate-and-fire (IF) model and the leaky integrate-and-fire (LIF) model.

Integrate-and-fire (IF) model

We introduce the integrate-and-fire (IF) model first, as it is widely used in previous rate-coding based conversion approaches. In IF models, each neuron accumulates potential from its input current and fires a spike when the potential reaches the threshold. Although successful conversions have been made from DNN neurons with ReLU activations to IF neurons, the transformation still has unsatisfactory aspects. For example, it is difficult to accurately determine thresholds in these conversion methods. Although Diehl et al. (Diehl et al. 2015) proposed data-based and model-based threshold determination methods, the converted SNN still suffers unstable accuracy loss compared with the re-trained DNN model. What is worse, IF neurons implement no time-dependent memory: once a neuron receives a below-threshold action potential at some time, it retains that voltage until it fires. Obviously, this is not in line with neural behavior observed in neuroscience.

Leaky Integrate-and-fire (LIF) model

To better model neural behavior in the spike-threshold setting, researchers in neuroscience proposed a simplified model, the Leaky Integrate-and-fire (LIF) model (Koch and Segev 1998), which is derived from the famous Hodgkin-Huxley model (Hodgkin and Huxley 1990). Generally, it is defined as follows:

I(t) − V(t)/R = C · dV(t)/dt,   (1)

where R is a leaky constant, C is the membrane capacitance, V(t) is the membrane potential and I(t) is the input stimulus.

Considering that the inputs are instantaneous currents generated from discrete-time neural spikes in an SNN model, equation (1) becomes:

V(t)/L + dV(t)/dt = Σi ωi · Ii(t),   (2)

where ωi denotes the synapse strength of the connections in the SNN and L is the leak-related constant. If there are no continuing spikes, the neuron potential decays as time goes by. The LIF model processes time information by adding a leaky term, reflecting the diffusion of ions through the membrane when some equilibrium is not reached in the cell. If no spike occurs during the time interval from t1 to t2, the potential of the LIF neuron changes as follows:

V(t2) = V(t1) · exp(−(t2 − t1)/L).   (3)

And if this neuron does not spike, the final potential accumulation P at time Til will be:

P(Til) = Σi ωi · exp(−(Til − ti)/L).   (4)

This implies that a pre-synaptic neuron's potential contribution to post-synaptic neurons is positively correlated with the firing time of its spike. This nonlinear characteristic has never been put to good use in previous conversion methods. We make full use of these features to build a conversion method for temporal-coding SNNs.
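To make the leak dynamics concrete, the short sketch below evaluates equations (3) and (4) directly; the function names and the example values of L, the weights and the spike times are illustrative choices of ours, not taken from the paper.

```python
import math

def decayed_potential(v_t1, t1, t2, L):
    """Equation (3): passive decay of the membrane potential from time t1 to t2."""
    return v_t1 * math.exp(-(t2 - t1) / L)

def accumulated_potential(weights, spike_times, t_il, L):
    """Equation (4): potential at time t_il from one spike per input; spikes that
    arrive later are discounted less by the leak, so they contribute more."""
    return sum(w * math.exp(-(t_il - t) / L)
               for w, t in zip(weights, spike_times))

# Illustrative values: with L = 5, an equal-weight spike arriving at t = 8
# contributes more at t_il = 10 than one arriving at t = 2.
L, t_il = 5.0, 10
print(accumulated_potential([1.0, 1.0], [2, 8], t_il, L))
```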
Theoretical Analysis

In this section, we present the details of our conversion method. We first introduce the Reverse Coding guideline and the ticking neuron mechanism. Then, we theoretically analyze all the requirements of the conversion method, including the temporal-coding function and the important parameters.

Reverse Coding

There have been many influential works on encoding an input into a single spike before ours. For example, Van et al. (Van and Thorpe 2001) proposed a rank-order based encoding scheme, in which inputs are encoded into a specific spiking order. The potential contribution of later-spiking neurons is punished with a delay factor, so that the neurons that fire earlier contribute the major part of the potential accumulation to post-synaptic neurons. Combined with LIF neurons, this coding method has been proven to achieve a Gaussian-difference-filtering effect, which is helpful for extracting features. But these guidelines fail to serve well when considering the conversion of DNNs to SNNs.

Unlike these works, in which significant inputs are encoded into earlier spike times, we follow an opposite coding guideline based on the characteristics of LIF neurons, and we name it Reverse Coding:

The stronger the input stimulus is, the later the corresponding neuron fires a spike.
This is consistent with the characteristics of leaky-IF neurons, for which post-synaptic neurons receive the most impressive contribution from recently spiking neurons.

Determining the principle of the coding scheme moves us one step closer to building a conversion method for SNNs with temporal coding. Following Reverse Coding, we still need to tackle the following challenges:

• How to maintain coding consistency in the entire network after mapping. Even if the input of the first layer is encoded according to Reverse Coding, there is no guarantee that the neurons in the subsequent layers will also spike exactly in the same way.

• How to eliminate the conversion error from DNNs to SNNs. A bunch of parameters (threshold, spike frequency, presentation time, etc.) need to be determined in previous rate-coding based conversion methods, and strict theoretical evidence for minimizing the error caused by parameter selection is still lacking.

• How to maintain accuracy in an SNN with spike-once neurons. This requires that the converted SNN makes full use of the nonlinear characteristics of the neuron model and that the adjustments to the DNN are well designed.

Next, we tackle these challenges by introducing the Ticking Neuron mechanism below.

[Figure 1: A layer with a ticking neuron in SNN. The ticking neuron connects to all post-synaptic neurons with weight ωtick and starts to function at time Til.]

Ticking Neuron Mechanism

Converting the weights of a DNN directly into those of an SNN is the most straightforward way, and it is what previous rate-coding based conversion methods did. However, since no inhibitory mechanisms are applied, the post-synaptic neurons could spike at any time once their potentials exceed the threshold. In such a case, it would be difficult to control the spiking moment and the number of spikes along with an appropriate threshold. What is worse, those neurons in the SNN whose corresponding neuron output in the DNN is large may issue spikes at an earlier time, which violates the Reverse Coding principle introduced above.

To address the challenges above, we propose an auxiliary neuron called the ticking neuron and a corresponding spike-processing mechanism. As shown in Figure 1, the post-synaptic neurons are inhibited from firing once the process begins, reflecting a common refractory period in SNNs. The inhibition lasts until time Til (Til must be large enough to cover all the pre-synaptic spikes; at the least it could be the firing time of the neuron that fires the last spike). During that time interval, each pre-synaptic neuron fires only one spike following Reverse Coding. The ticking neuron records the time Til and updates its connection weight ωtick to f(Til) (which means ωtick depends only on Til, and if Til is a pre-determined constant, ωtick could also be pre-determined). After that, the ticking neuron issues a spike at each time step until each of the post-synaptic neurons has fired one spike. Neurons that have already fired one spike are inhibited from firing again (they can be considered to be in a very long self-inhibitory period).

To make a LIF neuron whose corresponding neuron in the DNN outputs a large value fire at a later time point, the original weights in the DNN layers need to be negated. According to equation (4), those LIF neurons whose corresponding output value in the DNN is negative will spike immediately once the inhibition is released. For those post-synaptic neurons whose corresponding output in the DNN is positive, their potential will be at a negative point when all the pre-synaptic neurons have fired. After that, the ticking neuron begins to fire, increasing their potential until it reaches the given threshold. Clearly, the larger the output in the DNN, the later the corresponding neuron in the SNN fires a spike.

Why is the ticking neuron necessary? Because it guarantees that all of the neurons can fire spikes. Suppose we remove this neuron: a post-synaptic neuron with a negative potential would slowly approach the reset potential and would never fire a spike unless a threshold below the reset potential were set for it, which is not consistent with existing SNN findings and neuroscience evidence.

Now all that is required is the strict calculation of spike timing to achieve a lossless conversion.

Mapping Synapse-based Layer

Synapse-based layers refer to those layers with exact-weight connections in the DNN, including convolutional layers (CONV), inner-product layers (IP), pooling layers (POOL), etc. These layers are the fundamental layers of a DNN and also the most computationally intensive ones. We show how to convert these layers to corresponding layers in the SNN using the mechanisms mentioned above.

Suppose that the firing threshold of a post-synaptic neuron is θ, the last firing time of the pre-synaptic neurons is Til, and the current below-threshold potential is P. Clearly, the spike timing To of the post-synaptic neuron (considering that the layer's inhibition is released at time 0) is the minimum positive integer that obeys the following equation:

Σ_{t=0}^{To} ωtick · exp(−(To − t)/L) + P · exp(−To/L) ≥ θ,   (5)

where L is the leaky constant described before. Solving this equation, we obtain:

To = ⌈L · ln( ((−P) · (1 − exp(−1/L)) + ωtick) / (ωtick − θ · (1 − exp(−1/L))) )⌉.   (6)
In order to reduce the number of parameters to be determined, the above equation can be simplified by setting θ = 0. Then equation (6) turns into:

To = ⌈L · ln( (−P) · (1 − exp(−1/L)) / ωtick + 1 )⌉.   (7)

Note that a positive output value A of a DNN neuron will be mapped into a much smaller negative potential P due to the leaky effect of leaky-IF neurons, namely:

−P = A · exp(−Til/L).   (8)

Combining equation (7) with equation (8), we get the connection weight of the ticking neuron as:

ωtick = (1 − exp(−1/L)) · exp(−Til/L).   (9)

This is in line with the constraint mentioned in the previous section that ωtick depends only on Til. If Til is a pre-determined large constant, ωtick becomes a constant too. If Til is set as the last firing time of the pre-synaptic neurons, then ωtick is dynamically updated. Also, the temporal-coding function T(x) is obtained by combining equations (7), (8) and (9):

T(x) = ⌈L · ln(x + 1)⌉.   (10)

Correspondingly, a new activation function needs to be deployed right before these synapse-based layers:

G(x) = exp((1/L) · ⌈L · ln(x + 1)⌉)  if x ≥ 0;   G(x) = 1  if x < 0.   (11)

Note that negative input stimuli are avoided in our work, just as in other conversion methods, since negative neurons are neither necessary nor in line with existing neuroscience.

The bias term was explicitly excluded in most of the rate-coding based conversion methods. Rueckauer et al. (Rueckauer et al. 2017) realized the bias with an external spike input of constant rate proportional to the DNN bias. In a temporal-coding SNN, determining the firing time for a bias neuron is easy. Here the bias neuron is considered as an input with a numerical value of 1, so it is encoded into a spike time according to equation (10). The connection weights of bias neurons are also negated after re-training of the DNN.

Applying the mechanisms and coding functions above, the synapse-based layers in a DNN can be accurately mapped to spike-time-based ones in an SNN.
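The derivation above can be checked with a short sketch of equations (7)-(11), assuming Til and L are fixed constants chosen in advance; the helper names are ours. A DNN activation A decays to the negative potential P of equation (8), equation (9) gives ωtick, and equation (7) then recovers the spike time, which is the temporal-coding function T(x) of equation (10); G(x) of equation (11) is the activation placed before synapse-based layers during re-training.

```python
import math

def omega_tick(L, t_il):
    """Equation (9): ticking-neuron weight for leak constant L and inhibition time Til."""
    return (1.0 - math.exp(-1.0 / L)) * math.exp(-t_il / L)

def spike_time_from_potential(P, L, t_il):
    """Equation (7), threshold 0: first tick step at which the potential crosses zero."""
    w_tick = omega_tick(L, t_il)
    return math.ceil(L * math.log((-P) * (1.0 - math.exp(-1.0 / L)) / w_tick + 1.0))

def T(x, L):
    """Equation (10): temporal-coding function."""
    return math.ceil(L * math.log(x + 1.0))

def G(x, L):
    """Equation (11): activation deployed before synapse-based layers in the adjusted DNN."""
    return math.exp(T(x, L) / L) if x >= 0 else 1.0

# Consistency check with illustrative values: a DNN activation A leaks down to
# P = -A * exp(-Til / L) by equation (8); equation (7) then recovers T(A).
L, t_il, A = 5.0, 20, 3.0
P = -A * math.exp(-t_il / L)
assert spike_time_from_potential(P, L, t_il) == T(A, L)
```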
Mapping Max-Pool

The Max-Pool layer is a commonly used down-sampling layer in DNNs, and replacing it with Average-Pool will inevitably decrease the DNN accuracy. But it is non-trivial to compute maxima with spiking neurons in an SNN. In the rate-coding based SNN of (Rueckauer et al. 2017), the authors proposed a simple mechanism for spiking max-pooling, in which output units contain gating functions that only let spikes from the maximally firing neuron pass, while discarding spikes from other neurons. Their method proves to be efficient in rate-coding based SNNs but cannot be applied here in temporal-coding SNNs.

Here we show how a lossless mapping of the Max-Pool layer can be done by choosing the weight ωtick of the ticking neuron and the weights −ωk of the SNN max-pool kernel (the weights in this layer are also negative). Suppose that the size of the pooling kernel is N (usually equal to KX · KY, the kernel widths in the x and y directions respectively), that the firing threshold of the post-synaptic neurons is 0, and that the weights of the mapped kernel are equally distributed.

To determine the weights ωtick and ωk, two extreme scenarios should be considered. One is that a post-synaptic neuron accumulates a strong negative potential when all the pre-synaptic neurons in the kernel fire at time Tm (Tm ≤ Til). The other is that a post-synaptic neuron accumulates a weaker negative potential when only one pre-synaptic neuron fires at time Tm. It must be guaranteed that in both extreme cases the spike timing To of the post-synaptic neuron equals Tm.

Taking the potential firing times of the post-synaptic neurons in both cases into equation (7), we obtain equations (12) and derive the constraints shown in (13), where CL = (1 − exp(−1/L)) is a constant.

L · ln( ωk · (1 + (N − 1) · exp(−Tm/L)) / (ωtick/CL) + 1 ) > Tm − 1,
L · ln( (N · ωk / ωtick) · CL + 1 ) < Tm.   (12)

ωtick/ωk > CL · N / (exp(Tm/L) − 1),
ωtick/ωk < CL · (1 + (N − 1) · exp(−Tm/L)) / (exp((Tm − 1)/L) − 1).   (13)

Notice that ωtick/ωk depends only on Tm. If Tm = 1, only the first of the above inequalities is needed. Considering that the two inequalities can hold simultaneously only if the lower bound is smaller than the upper bound, the constraint on L when Tm > 1 is obtained:

N / (exp(Tm/L) − 1) < (1 + (N − 1) · exp(−Tm/L)) / (exp((Tm − 1)/L) − 1).   (14)

Because ωk, L and N are predetermined, ωtick depends only on the presentation time Til of the pre-synaptic neurons. As a result, a lossless mapping of the Max-Pool layer can be done by selecting parameters according to equations (13) and (14).

However, solving such constraints for various N and L would be complicated, and the existence of solutions needs to be checked under many circumstances. Here we present a simplified solution for mapping the Max-Pool layer under our mechanism. By setting the firing threshold θ = N and ωk = 1, and dropping both the ticking neuron and the leaky effect, the firing moment of the output neuron is equal to the spiking moment of the latest-firing neuron in the input kernel. In this way, Max-Pool is realized in the SNN with Reverse Coding.
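Under this simplified scheme the behaviour is easy to state in the temporal domain: with θ = N, ωk = 1 and neither ticking neuron nor leak, an output neuron reaches its threshold exactly when the last input in its kernel has spiked, and under Reverse Coding the latest spike encodes the largest value. A hedged sketch with our own function name and a toy 1-D kernel:

```python
def maxpool_spike_time(kernel_spike_times):
    """Simplified temporal Max-Pool: each input contributes one unit of potential
    (omega_k = 1), the threshold is the kernel size N, and there is no leak, so
    the output fires exactly when the last input in the kernel has fired."""
    n = len(kernel_spike_times)              # threshold theta = N
    potential = 0
    for t in sorted(kernel_spike_times):     # one spike per input, in time order
        potential += 1
        if potential >= n:
            return t                         # threshold reached at the latest spike
    return None

# Under Reverse Coding the latest spike corresponds to the largest activation,
# so the output spike time equals max(input spike times).
assert maxpool_spike_time([3, 7, 2, 5]) == 7
```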
Network Conversion

In this section, we introduce the specific conversion steps and the forward propagation procedure of the converted SNN, based on the theoretical analysis above.

DNN Adjustment and Re-training

As shown in Figure 2, fetching weights from a fitted DNN model consists of the following steps. As in the previous rate-coding based conversion methods, a minor adjustment of the original DNN network is needed.

[Figure 2: Conversion procedure. The CNN (CONV, ReLU, POOL, IP, ReLU) is trained, adjusted by inserting G(x) before the synapse-based layers and re-trained, and then converted into the SNN (SNN-CONV, SNN-POOL, SNN-IP).]

Firstly, ReLU activations are deployed right after all the synapse-based layers, which is a classic structure in network models such as Alexnet and VGG-16. The adjusted network is trained with commonly used techniques for DNN training.

Secondly, the activation function G(x) (see equation (11)) derived from the temporal-coding function is put right before the synapse-based layers, and the network is re-trained until its loss converges.

Finally, the weights of the synapse-based layers are negated (except for the output layer, as it produces a float value for the final decision). Layers with no weights, such as the Max-Pool layer, are converted to SNN layers with weighted connections; the determination of ωk and ωtick has already been provided in the last section.
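As a rough illustration of the three steps above, the NumPy sketch below adjusts a toy fully-connected layer: G(x) is applied before the synapse-based layer, ReLU follows it, and the re-trained weights are negated for the SNN (the output layer would be skipped). The layer representation and the function names are ours; the actual models in the paper are Lenet, Alexnet and VGG-16 trained in a standard DNN framework.

```python
import numpy as np

def G(x, L=5.0):
    """Equation (11), applied element-wise before synapse-based layers."""
    T = np.ceil(L * np.log(np.maximum(x, 0.0) + 1.0))
    return np.where(x >= 0, np.exp(T / L), 1.0)

def adjusted_dnn_layer(x, W, b, L=5.0):
    """Adjusted DNN forward pass for one hidden layer: G(x) before the layer, ReLU after."""
    return np.maximum(W @ G(x, L) + b, 0.0)

def convert_layer_weights(W, b):
    """Conversion step for hidden synapse-based layers: negate the re-trained
    weights (and bias connections) so larger DNN outputs map to later spikes."""
    return -W, -b

# Illustrative toy layer with 3 inputs and 4 outputs.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
y = adjusted_dnn_layer(rng.normal(size=3), W, b)
W_snn, b_snn = convert_layer_weights(W, b)
```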
Forward Propagation of SNN

The structure of each layer in the converted SNN is organized as in Figure 1. According to the previous theoretical analysis, the entire computing procedure is quite different from previous temporal-coding or rate-coding SNNs, so we propose a new propagation algorithm for each layer of the converted SNN, described in Algorithm 1.

Algorithm 1 Layer propagation in SNN
Require: pre-synaptic neurons that spike only once; inhibition time Til; a ticking neuron that functions after Til, connected to all the post-synaptic neurons with weight ωtick.
Ensure: post-synaptic neurons spike only once.
1: Inhibitory Period. Post-synaptic neurons accumulate potential according to the pre-synaptic spikes. All of them are inhibited from firing until Til.
2: Determine Parameters. Update the weight ωtick of the ticking neuron according to Til and equations (9) and (13).
3: Active Period. The ticking neuron fires at each time step, increasing the potential of the post-synaptic neurons. Then the post-synaptic neurons fire according to the LIF neural model.

Each layer in the converted SNN follows this algorithm to process spikes. The operation of the entire SNN network can be seen as inhibitory periods and active periods appearing alternately in a pipeline, as shown in Figure 3: the active period of the current layer is also the inhibitory period of the next layer. For the last layer of the network, we only accumulate the potential of the output neurons. The winner is the neuron with the largest potential, representing the final result.

[Figure 3: Propagation pipeline. The inhibitory and active periods of layer i, layer i+1 and layer i+2 alternate; each layer's active period is the next layer's inhibitory period.]
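A hedged, discrete-time sketch of Algorithm 1 for a single layer is given below. It assumes a fixed Til and a firing threshold of 0, uses our own data layout (one spike time per pre-synaptic neuron, already-negated weights), and keeps step 2 to the ωtick update of equation (9).

```python
import math

def propagate_layer(weights, pre_spike_times, t_il, L, max_steps=1000):
    """Algorithm 1 sketch: inhibitory period, parameter update, then active period
    until every post-synaptic neuron has fired exactly once.
    weights[j][i] is the (negated) weight from pre-neuron i to post-neuron j."""
    # 1: Inhibitory Period -- accumulate the leaky potential up to Til (equation (4)).
    potentials = [sum(w_ji * math.exp(-(t_il - t_i) / L)
                      for w_ji, t_i in zip(w_j, pre_spike_times))
                  for w_j in weights]
    # 2: Determine Parameters -- ticking-neuron weight from equation (9).
    w_tick = (1.0 - math.exp(-1.0 / L)) * math.exp(-t_il / L)
    # 3: Active Period -- neurons already at or above threshold fire as soon as the
    # inhibition is released; the rest are pushed up by the ticking neuron.
    spike_times = [0 if p >= 0.0 else None for p in potentials]
    decay = math.exp(-1.0 / L)
    for step in range(1, max_steps + 1):
        if all(t is not None for t in spike_times):
            break
        for j, p in enumerate(potentials):
            if spike_times[j] is None:
                p = p * decay + w_tick      # leak, then one tick of input
                potentials[j] = p
                if p >= 0.0:
                    spike_times[j] = step   # fire once, then stay silent
    return spike_times
```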
Evaluation

Accuracy

We select three representative DNN models as benchmarks in this paper: Lenet (Lecun et al. 1998), Alexnet (Krizhevsky, Sutskever, and Hinton 2012) and VGG-16 (Simonyan and Zisserman 2014); see Table 1. The three DNN models are designed for two different datasets: Lenet is for MNIST; Alexnet and VGG-16 are for ImageNet. MNIST consists of 60,000 individual images (28 × 28 grayscale) of handwritten digits (0-9) for training and 10,000 digits for testing. ImageNet ILSVRC-2012 includes large images in 1000 classes and is split into three sets: a training set (1.3M images), a validation set (50K images), and a testing set (100K images). The classification performance is evaluated using two measures, the top-1 and top-5 error: the former reflects the error rate of the classification, and the latter is often used as the criterion for the final evaluation.

Comparing our SNN with the original DNN, the accuracy loss is trivial in Lenet and Alexnet (0.12% to 0.46%/0.5%). This illustrates that using the new activation function (11) does not have a dramatic impact on network accuracy. Unexpectedly, an accuracy increase of 1.69%/0.83% is obtained in VGG-16. We attribute this phenomenon to the newly proposed activation function; it suggests that the function might to some extent enhance the robustness of the original network and help prevent over-fitting.
Dataset | Network depth | DNN err. (%) | Previous SNN err. (%) | Our SNN err. (%)
Lenet [MNIST] | 11 | 0.84 | 0.56 (Rueckauer et al. 2017) | 0.92
Alexnet [ImageNet] | 20 | 42.84/19.67 | 48.2/23.8 (Hunsberger 2018) | 43.3/20.17
VGG-16 [ImageNet] | 38 | 30.82/10.72 | 50.39/18.37 (Rueckauer et al. 2017) | 29.13/9.89

Table 1: Accuracy results (ImageNet entries are top-1/top-5 error).

Dataset | DNN mult | DNN add | SNN mult | SNN add | DNN mult / SNN mult | DNN add / SNN add | DNN total / SNN total
Lenet [MNIST] | 2239k | 2235k | 1920k | 3194k | 1.17× | 0.7× | 0.875×
Alexnet [ImageNet] | 357M | 357M | 39M | 377M | 9.15× | 0.95× | 1.72×
VGG-16 [ImageNet] | 14765M | 14757M | 1496M | 15505M | 9.87× | 0.95× | 1.74×

Table 2: Evaluation on the number of operations.

The experimental results also show that our method outperforms previous SNN architectures, especially in large networks. The error rate of our SNN on Lenet is 0.36% larger, which is trivial compared with the accuracy gap on complex tasks. When processing complex tasks, our method achieves a significant improvement in SNN accuracy: Alexnet (4.9%/3.63%) and VGG-16 (21.26%/8.48%). This proves that limiting the firing number of each neuron to one spike does not reduce the accuracy of the SNN. With the proposed ticking neuron mechanism, the performance of the temporal-coding based SNN is much better than that of rate-coding based SNNs. All the conversion results on the three benchmarks show that temporal-coding based SNNs are now able to achieve competitive accuracy compared with either DNNs or rate-coding based SNNs.

Computation Cost

To evaluate the number of operations in the networks during the entire forward propagation, we separately count the addition and multiplication operations in the layers, including CONV, POOL and IP. In addition to the multiplication and addition operations in vector dot products or potential accumulations, we also take comparison operations into account: each comparison operation in the POOL layer is treated as an addition, and the leak of the LIF neurons at each time step is counted as a multiplication.

The comparison of operations between the DNN and our SNN (with L = 5, the setting with the highest accuracy) is shown in Table 2. Obviously, the SNN reduces the multiplication operations on all three benchmarks but also increases the number of additions. The amount of multiplications in the DNN is 1.17×/9.15×/9.87× that of the SNN in Lenet/Alexnet/VGG-16. This is because the number of multiplications in the SNN is only related to the total presentation time and does not increase with the number of pre-synaptic neurons, so the reduction of multiplications is more significant in larger networks. For additions, however, the SNN brings extra operations due to the ticking neuron with its high spike frequency, so the additions in the DNN are 0.7×/0.95×/0.95× those in the SNN on the three benchmarks. Combining the multiplications and additions, we obtain the results on total operations. For a smaller network such as Lenet, the total number of operations in the SNN is 1.14× that of the DNN; the reason is that the number of pre-synaptic neurons is so small that the extra operations brought by the ticking neuron become significant. For larger networks, such as Alexnet and VGG-16, the computation benefits are obvious: the SNN reduces the number of total operations by 41.9%/42.6%, which proves that our SNN can obtain a significant computation reduction in larger networks.
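The ratio columns of Table 2 can be reproduced from the raw operation counts; the short sketch below does so for the two ImageNet networks (values in millions of operations, copied from Table 2). The small differences from the 41.9%/42.6% reductions quoted above are presumably due to rounding of the table entries.

```python
# Operation counts in millions, copied from Table 2.
nets = {
    "Alexnet": {"dnn_mult": 357,   "dnn_add": 357,   "snn_mult": 39,   "snn_add": 377},
    "VGG-16":  {"dnn_mult": 14765, "dnn_add": 14757, "snn_mult": 1496, "snn_add": 15505},
}

for name, c in nets.items():
    dnn_total = c["dnn_mult"] + c["dnn_add"]
    snn_total = c["snn_mult"] + c["snn_add"]
    reduction = 1.0 - snn_total / dnn_total   # fraction of total operations saved
    print(f"{name}: total ratio {dnn_total / snn_total:.2f}x, reduction {reduction:.1%}")
```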
Leaky Constant L

Our conversion is a much simpler and more convenient procedure compared with other rate-coding based SNNs. Previous rate-coding based SNN conversion methods need to determine various important parameters, including the maximum spike frequency, the spiking threshold of each layer, etc. Determining these parameters causes huge labour and unstable performance of the converted SNN. In our approach, however, all the parameters are carefully selected according to the theoretical analysis above. Among them, the leak constant L is our primary concern.

[Figure 4: Effects of leaky constant L (from 1 to 5) on accuracy, for Lenet, Alexnet (top-1/top-5) and VGG-16 (top-1/top-5).]

As the output of equation (10), the exact firing time increases as the leaky constant becomes larger. Therefore, a large L will increase the presentation time Til of each layer and map the input stimulus into more precise spike timing, which brings better performance. Choosing a small L may narrow the presentation time, resulting in a divergent loss in the DNN with the adjusted architecture. The presentation time also affects the computation overhead of the converted SNN, as the ticking neuron fires constantly.

We evaluate the effect of the leaky constant L on accuracy, as shown in Figure 4. We find that, for Lenet, when L changes from 1 to 5 the accuracy changes little, only from 98.47% to 99.08%. This is because, in simple tasks such as the MNIST dataset, the network can stand much more degraded input values. However, for Alexnet and VGG-16, there is a significant change with the increase of L. When L changes from 1 to 5, the accuracy of Alexnet changes from 42.08%/66.96% to 56.7%/79.83%, and VGG-16 changes from 0%/0% (training failure) to 70.87%/90.11%. The adjusted DNN cannot converge when L is small enough, as shown for VGG-16, resulting in a collapse of the entire network.

[Figure 5: Effects of leaky constant L (from 1 to 5) on the DNN Ops / SNN Ops ratio (add, mult, total) in Alexnet.]

Correspondingly, the effects of the leaky constant L on the computation cost are evaluated, as shown in Figure 5. The numbers of addition, multiplication and total operations all increase with a larger L. When L changes from 1 to 5, the computation reduction changes from 48.4% to 41.7%. This is consistent with our previous analysis: as L increases, the growth of the presentation time only increases the number of operations brought by the ticking neuron.
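The dependence of the presentation time on L follows directly from equation (10): for a fixed maximum input value, the largest spike time of a layer, and hence its Til, grows roughly linearly with L. A small illustration with an arbitrarily chosen maximum activation:

```python
import math

def presentation_time(x_max, L):
    """Largest spike time produced by the temporal-coding function T(x) of equation (10)."""
    return math.ceil(L * math.log(x_max + 1.0))

# Illustrative: presentation time for a maximum activation of 10 as L grows from 1 to 5.
print([presentation_time(10.0, L) for L in range(1, 6)])   # [3, 5, 8, 10, 12]
```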

Discussion

Consider deploying a temporal-coding based SNN on SNN hardware. Even if the proposed temporal-coding based SNN can already be fully deployed on an SNN chip or platform, it may not be necessary to completely map the entire network to complete a task. In fact, only the compute-intensive layers of the SNN need to be deployed; layers that require fewer vector products, such as max-pool, could be calculated in other components of a chip or in other processing units. What is more, some biological mechanisms could be abandoned to accelerate computation.

Note that the ticking neuron plays the role of exciting post-synaptic neurons biologically, resulting in an interpretable SNN model. However, according to the evaluation in the last section, the deployment of the ticking neuron enlarges the computation cost to some extent. Thus, to accelerate computing and reduce memory cost, it could be removed when deploying SNN layers on hardware. This also allows a trained DNN weight model to be directly converted to an SNN weight model without the need for weight negation and the determination of other parameters such as ωtick and ωk.

Deployed in SNN hardware

[Figure 6: Deploying compute-intensive layers in SNN hardware. SNN-CONV layers run on the SNN hardware, while the layers in between (e.g., SNN-POOL) and the activations act as a compound encoding function that turns accumulated potential into spike timing.]

Operations | DNN | SNN
Addition | (M − 1) · N | (M − 1) · N
Multiplication | M · N | N · To

Table 3: Theoretical computation cost comparison between a DNN layer and an SNN layer deployed in hardware.

For example, as shown in Figure 6, the CONV layers are computed on SNN hardware, and their post-synaptic neurons accumulate potential while being inhibited from firing. The layers between two synapse-based layers, including the activations, are considered as a compound encoding function that encodes the accumulated potential into spike timing for the next synapse-based layer; the selected encoding function is equation (10). After calculation, only the spike timings of the pre-synaptic neurons of the next synapse-based layer need to be stored in on-chip memory. This deployment method has a larger potential to reduce computation than the proposed SNN on SNN hardware, though a large part of the interesting biological mechanisms is discarded.
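A hedged sketch of this deployment view, with our own function names: the synapse-based layer only accumulates potentials on the SNN hardware, a compound encoding function applies equation (10) to turn the accumulated potentials of one layer into the spike timings stored for the next, and the per-layer cost then follows Table 3 (additions from the dot products, multiplications from the leak over To time steps).

```python
import math

def compound_encode(potentials, L):
    """Compound encoding function: map accumulated potentials to spike timings for
    the next synapse-based layer with T(x) of equation (10). Non-positive potentials
    spike immediately (time 0) in this sketch, mirroring negative DNN outputs."""
    return [math.ceil(L * math.log(p + 1.0)) if p > 0 else 0 for p in potentials]

def hardware_layer_ops(m, n, t_o):
    """Theoretical per-layer cost of Table 3 for m inputs and n outputs: additions
    come from the dot products, multiplications from applying the leak to each of
    the n neurons over t_o time steps."""
    return {"dnn": {"add": (m - 1) * n, "mult": m * n},
            "snn": {"add": (m - 1) * n, "mult": n * t_o}}

# Illustrative sizes: when t_o is much smaller than m, multiplications shrink sharply.
print(compound_encode([3.0, 0.0, 12.5], L=5.0))
print(hardware_layer_ops(m=4096, n=1000, t_o=25))
```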
Conclusion

In this paper, we propose a new conversion method along with a novel coding principle called Reverse Coding and a novel Ticking Neuron mechanism. Our method can convert DNNs to temporal-coding SNNs with little accuracy loss, and the converted temporal-coding SNNs are consistent with biological models. Based on our experiments, the presented SNNs can significantly reduce computation cost, and they are potential cost-saving alternatives when deployed on SNN hardware.

Acknowledgments

This work is partially supported by the National Key Research and Development Program of China (under Grants 2017YFA0700902 and 2017YFB1003101), the NSF of China (under Grants 61472396, 61432016, 61473275, 61522211, 61532016, 61521092, 61502446, 61672491, 61602441, 61602446, 61732002, and 61702478), the 973 Program of China (under Grant 2015CB358800), the National Science and Technology Major Project (2018ZX01031102), the Transformation and Transfer of Scientific and Technological Achievements of the Chinese Academy of Sciences (KFJ-HGZX-013) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDBS01050200).

References

Abdel-Hamid, O.; Mohamed, A. R.; Jiang, H.; and Penn, G. 2012. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, 4277-4280.

Bohte, S. M.; Kok, J. N.; and Han, A. L. P. 2000. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48(1):17-37.

Cao, Y.; Chen, Y.; and Khosla, D. 2015. Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition. Kluwer Academic Publishers.

Diehl, P.; Neil, D.; Binas, J.; Cook, M.; Liu, S.-C.; and Pfeiffer, M. 2015. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing.

He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Deep residual learning for image recognition. 770-778.

Hinton, G.; Deng, L.; Yu, D.; Dahl, G. E.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; and Sainath, T. N. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29(6):82-97.

Hodgkin, A. L., and Huxley, A. F. 1990. A quantitative description of membrane current and its application to conduction and excitation in nerve. Bulletin of Mathematical Biology 52(1-2):25-71.

Hunsberger, E. 2018. Spiking deep neural networks: Engineered and biological approaches to object recognition. UWSpace.

Kim, Y. 2014. Convolutional neural networks for sentence classification. Eprint Arxiv.

Koch, C., and Segev, I. 1998. Methods in Neuronal Modeling: From Ions to Networks. MIT Press.

Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Pereira, F.; Burges, C. J. C.; Bottou, L.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 25. Curran Associates, Inc. 1097-1105.

Krizhevsky, A. 2009. Learning multiple layers of features from tiny images.

Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278-2324.

Lee, J. H.; Delbruck, T.; and Pfeiffer, M. 2016. Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience 10.

Maass, W. 1997. Network of spiking neurons: the third generation of neural network models. Transactions of the Society for Computer Simulation International 14(4):1659-1671.

Perezcarrasco, J. A.; Zhao, B.; Serrano, C.; Acha, B.; Serranogotarredona, T.; Chen, S.; and Linaresbarranco, B. 2013. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate-coding and coincidence processing. Application to feed forward convnets. IEEE Transactions on Pattern Analysis & Machine Intelligence 35(11):2706-2719.

Rueckauer, B., and Liu, S. C. 2018. Conversion of analog to spiking neural networks using sparse temporal coding. In IEEE International Symposium on Circuits and Systems, 1-5.

Rueckauer, B.; Lungu, I. A.; Hu, Y.; Pfeiffer, M.; and Liu, S. C. 2017. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience 11:682.

Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; and Bernstein, M. 2014. Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3):211-252.

Sainath, T. N.; Mohamed, A. R.; Kingsbury, B.; and Ramabhadran, B. 2013. Deep convolutional neural networks for LVCSR. In IEEE International Conference on Acoustics, Speech and Signal Processing, 8614-8618.

Severyn, A., and Moschitti, A. 2015. Learning to rank short text pairs with convolutional deep neural networks. In The International ACM SIGIR Conference, 373-382.

Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. Computer Science.

Thorpe, S.; Delorme, A.; and Rullen, R. V. 2001. Spike-based strategies for rapid processing. Neural Networks 14(6):715-725.

Van, R. R., and Thorpe, S. J. 2001. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Computation 13(6):1255-1283.

Zhang, T.; Zeng, Y.; Zhao, D.; and Shi, M. 2018. A plasticity-centric approach to train the non-differential spiking neural networks.
