putationally efficient than rate-coding based SNNs. Recently, Rueckauer and Liu (Rueckauer and Liu 2018) proposed a conversion method using sparse temporal coding, but it is only suitable for shallow neural network models.

In this paper, we put forward a novel conversion method called TDSNN, which transfers DNNs to temporal-coding SNNs with only trivial accuracy loss. We also propose a new encoding scheme, Reverse Coding, along with a novel Ticking Neuron mechanism. We address three challenges during the conversion process. Firstly, we maintain coding consistency in the entire network after mapping. Secondly, we eliminate the conversion error from DNNs to SNNs. Thirdly, we maintain accuracy in an SNN whose neurons spike only once. The details are discussed later in the Reverse Coding part. According to our evaluation, we achieve a 42% reduction of total operations on average in large networks compared with DNNs, with no more than 0.5% accuracy loss. The experimental results show that our method is an efficient and high-performance conversion method for temporal-coding based SNNs.

Background

In this section, we introduce two common neural models: the integrate-and-fire (IF) model and the leaky integrate-and-fire (LIF) model.

Integrate-and-fire (IF) model

We first introduce the integrate-and-fire (IF) model, which is widely used in previous rate-coding based conversion approaches. In IF models, each neuron accumulates potential from its input current and fires a spike when the potential reaches the threshold. Although successful conversions have been made from DNN neurons with ReLU activations to IF neurons, the transformation still has unsatisfactory aspects. For example, it is difficult to accurately determine thresholds in these conversion methods. Although Diehl et al. (Diehl et al. 2015) proposed data-based and model-based threshold determination methods, the converted SNN still suffers unstable accuracy loss compared with the re-trained DNN model. What is worse, IF neurons implement no time-dependent memory: once a neuron receives a below-threshold action potential, it retains that voltage until it fires, which is not in line with the neural behavior observed in neuroscience.

Leaky Integrate-and-fire (LIF) model

To better model neural behavior in the spike-threshold setting, researchers in neuroscience proposed a simplified model, the leaky integrate-and-fire (LIF) model (Koch and Segev 1998), which is derived from the famous Hodgkin–Huxley model (Hodgkin and Huxley 1990). Generally, it is defined as follows:

    I(t) - \frac{V(t)}{R} = C \cdot \frac{dV(t)}{dt},    (1)

where R is a leaky constant, C is the membrane capacitance, V(t) is the membrane potential and I(t) is the input stimulus.

Considering that the inputs are instantaneous currents generated from discrete-time neural spikes in an SNN model, equation (1) turns into:

    \frac{V(t)}{L} + \frac{dV(t)}{dt} = \sum_i \omega_i \cdot I_i(t),    (2)

where ω_i denotes the synapse strength of the connections in the SNN and L is the leak-related constant. If there are no continuous spikes, the neuron potential decays as time goes by. The LIF model processes time information by adding a leaky term, reflecting the diffusion of ions through the membrane when some equilibrium is not reached in the cell. If no spike occurs during the time interval from t_1 to t_2, the potential of the LIF neuron changes as follows:

    V(t_2) = V(t_1) \cdot \exp\left(-\frac{t_2 - t_1}{L}\right).    (3)

And if this neuron itself does not fire a spike, the final potential accumulation P at time T_il will be:

    P(T_{il}) = \sum_i \omega_i \cdot \exp\left(-\frac{T_{il} - t_i}{L}\right).    (4)

This implies that a pre-synaptic neuron's potential contribution to post-synaptic neurons is positively correlated with the firing time of its spike. This nonlinear characteristic has never been put to good use in previous conversion methods. We make full use of these features to build a conversion method for temporal-coding SNNs.
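
To make the leak dynamics concrete, the following minimal Python sketch (our illustration, not code from the paper; the spike times, weights and constants are made up) steps a single non-firing LIF neuron through discrete time and checks the result against the closed form of equation (4).

    import math

    L = 10.0                    # leak-related constant from equation (2)
    T_il = 20                   # observation time, covering all input spikes
    spikes = [(3, 0.5), (7, -0.2), (12, 0.8)]   # (firing time t_i, weight w_i), illustrative values

    # Step-by-step simulation of equations (2)-(3): decay each step, add the weight on a spike.
    v = 0.0
    for t in range(1, T_il + 1):
        v *= math.exp(-1.0 / L)                 # leak over one time step, equation (3)
        v += sum(w for (ti, w) in spikes if ti == t)

    # Closed-form accumulation of equation (4).
    p = sum(w * math.exp(-(T_il - ti) / L) for (ti, w) in spikes)

    print(v, p)   # the two values agree up to floating-point error

The later a pre-synaptic spike arrives, the less it has decayed by T_il, which is exactly the property Reverse Coding exploits below.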

Theoretical Analysis

In this section, we present the details of our conversion method. We first introduce the Reverse Coding guideline and the ticking neuron mechanism, respectively. Then, we theoretically analyze all the requirements of the conversion method, including the temporal-coding function and the important parameters.

Reverse Coding

There have been many influential works on encoding an input into a single spike before our work. For example, Van Rullen and Thorpe (Van and Thorpe 2001) proposed a rank-order based encoding scheme, in which the inputs are encoded into a specific spiking order. The potential contribution of later-spiking neurons is punished with a delay factor, so that the neurons that fire earlier contribute the major part of the potential accumulation to post-synaptic neurons. Combined with LIF neurons, this coding method has been proven to achieve a Gaussian-difference-filtering effect, which is helpful for extracting features. But these guidelines fail to serve well when considering the conversion of a DNN to an SNN.

Unlike these works, in which significant inputs are encoded into earlier spike times, we follow an opposite coding guideline based on the characteristics of LIF neurons, and we name it Reverse Coding:

The stronger the input stimulus is, the later the corresponding neuron fires a spike.
This is consistent with the characteristics of leaky-IF neurons, where the largest contribution to a post-synaptic neuron's potential comes from the pre-synaptic neurons that spiked most recently.

Determining the principle of the coding scheme moves us one step closer to a conversion method for SNNs with temporal coding. Following Reverse Coding, we still need to tackle the following challenges:

• How to maintain coding consistency in the entire network after mapping. Even if the input of the first layer is encoded according to Reverse Coding, there is no guarantee that the neurons in the subsequent layers will also spike exactly in the same way.
• How to eliminate the conversion error from DNNs to SNNs. A bunch of parameters (threshold, spike frequency, presentation time, etc.) need to be determined in previous rate-coding based conversion methods, and strict theoretical evidence for minimizing the error caused by parameter selection is still lacking.
• How to maintain accuracy in an SNN with spike-once neurons. This requires that the converted SNN makes full use of the nonlinear characteristics of the neuron model and that the adjustments to the DNN are well designed.

Next, we tackle these challenges by introducing the Ticking Neuron mechanism below.

Figure 1: A layer with a ticking neuron in SNN. (Diagram: pre-synaptic and post-synaptic neurons, with a ticking neuron connected to all post-synaptic neurons through weight ω_tick and activated at time T_il.)

Ticking Neuron Mechanism

Converting the weights of the DNN directly into those of the SNN is the most straightforward way, and it has been used in previous rate-coding based conversion methods. However, since no inhibitory mechanism is applied, the post-synaptic neurons could spike at any time once their potentials exceed the threshold. In that case it is difficult to control the spiking moment and the number of spikes, along with the appropriate threshold. What is worse, those neurons in the SNN whose corresponding neuron output in the DNN is large may issue spikes at an earlier time, which violates the Reverse Coding principle introduced above.

To address these challenges, we propose an auxiliary neuron called the ticking neuron and a corresponding spike processing mechanism. As shown in figure 1, the post-synaptic neurons are inhibited from firing once the process begins, reflecting a common refractory period in SNNs. The inhibition lasts until time T_il (T_il must be large enough to cover all the pre-synaptic spikes; at the least it can be the firing time of the last pre-synaptic spike). During that time interval, each of the pre-synaptic neurons fires only one spike, following Reverse Coding. The ticking neuron records the time T_il and updates its connection weight ω_tick to f(T_il) (which means that ω_tick depends only on T_il, and if T_il is a pre-determined constant, ω_tick can also be pre-determined). After that, the ticking neuron issues a spike at each time step until each of the post-synaptic neurons has fired one spike. Neurons that have already fired a spike are inhibited from firing again (they can be considered to be in a very long self-inhibitory period).

To make a LIF neuron whose corresponding neuron in the DNN outputs a large value fire at a later time point, the original weights in the DNN layers need to be negated. According to equation (4), LIF neurons whose corresponding output value in the DNN is negative will spike immediately once the inhibition is released, while post-synaptic neurons whose corresponding output in the DNN is positive will sit at a negative potential when all the pre-synaptic neurons have fired. After that, the ticking neuron begins to fire, increasing their potential until it reaches the given threshold. Clearly, the larger the output in the DNN, the later the corresponding neuron in the SNN fires a spike.

Why is the ticking neuron necessary? Because it guarantees that all of the neurons can fire spikes. Suppose we removed it: a post-synaptic neuron with a negative potential would slowly approach the reset potential and would never fire a spike unless a threshold below the reset potential were set for it, which is not consistent with existing SNN findings and neuroscience evidence.

Now all that is required is a strict calculation of spike timing to achieve a lossless conversion.

Mapping Synapse-based Layer

Synapse-based layers refer to layers with exact-weight connections in the DNN, including convolutional layers (CONV), inner-product layers (IP), pooling layers (POOL), etc. They are the fundamental layers of a DNN and also the most computationally intensive ones. We show how to convert these layers into corresponding SNN layers using the mechanisms above.

Suppose the firing threshold of a post-synaptic neuron is θ, the last firing time of the pre-synaptic neurons is T_il, and the current below-threshold potential is P. Clearly, the spike timing T_o of the post-synaptic neuron (considering that the output layer's inhibition is released at time 0) is the minimum positive integer that obeys the following equation:

    \sum_{t=0}^{T_o} \omega_{tick} \cdot \exp\left(-\frac{T_o - t}{L}\right) + P \cdot \exp\left(-\frac{T_o}{L}\right) \ge \theta,    (5)

where L is the leaky constant described before. Solving this equation, we obtain:

    T_o = \left\lceil L \cdot \ln\frac{(-P) \cdot (1 - \exp(-1/L)) + \omega_{tick}}{\omega_{tick} - \theta \cdot (1 - \exp(-1/L))} \right\rceil.    (6)

In order to reduce the number of parameters to be determined, the above equation can be simplified by setting θ = 0. Then equation (6) turns into:

    T_o = \left\lceil L \cdot \ln\left(\frac{(-P) \cdot (1 - \exp(-1/L))}{\omega_{tick}} + 1\right) \right\rceil.    (7)

Note that a positive output value A of a DNN neuron will be mapped into a much smaller negative potential P, due to the leaky effect of leaky-IF neurons:

    -P = A \cdot \exp\left(-\frac{T_{il}}{L}\right).    (8)

Combining equation (7) with equation (8), we get the connection weight of the ticking neuron:

    \omega_{tick} = \left(1 - \exp\left(-\frac{1}{L}\right)\right) \cdot \exp\left(-\frac{T_{il}}{L}\right).    (9)

This is in line with the constraint mentioned in the previous section that ω_tick depends only on T_il. If T_il is a pre-determined large constant, ω_tick becomes a constant too; if T_il is set to the last firing time of the pre-synaptic neurons, then ω_tick is updated dynamically. Also, the temporal-coding function T(x) is obtained by combining equations (7), (8) and (9):

    T(x) = \lceil L \cdot \ln(x + 1) \rceil.    (10)

Correspondingly, a new activation function needs to be deployed right before these synapse-based layers:

    G(x) = \begin{cases} \exp\left(\frac{1}{L} \cdot \lceil L \cdot \ln(x + 1) \rceil\right), & x \ge 0, \\ 1, & x < 0. \end{cases}    (11)

Note that negative input stimuli are avoided in our work, just as in other conversion methods, since negative neurons are neither necessary nor in line with existing neuroscience.

The bias term was explicitly excluded in most rate-coding based conversion methods. Rueckauer et al. (Rueckauer et al. 2017) implemented the bias with an external spike input whose constant rate is proportional to the DNN bias. In a temporal-coding SNN, determining the firing time of a bias neuron is easy. Here the bias neuron is considered an input with a numerical value of 1, so it is encoded into a spike timing according to equation (10). The connection weights of bias neurons are also negated after re-training the DNN.

With the mechanisms and coding functions above, the synapse-based layers in a DNN can be accurately mapped to spike-time-based ones in an SNN.
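
The pieces above fit together as follows; the short Python sketch below (our illustration, with made-up parameter values) implements the temporal-coding function T(x) of equation (10), the adjusted activation G(x) of equation (11) and the ticking-neuron weight of equation (9), and evaluates the closed-form firing time of equation (7) to show that a positive DNN output A is recovered as the spike time T(A).

    import math

    L = 10.0        # leaky constant (illustrative choice)
    T_il = 60       # inhibition time, large enough to cover all pre-synaptic spikes

    def T(x):
        """Temporal-coding function, equation (10)."""
        return math.ceil(L * math.log(x + 1))

    def G(x):
        """Adjusted activation used when re-training the DNN, equation (11)."""
        return math.exp(T(x) / L) if x >= 0 else 1.0

    def w_tick():
        """Ticking-neuron weight, equation (9)."""
        return (1 - math.exp(-1 / L)) * math.exp(-T_il / L)

    def firing_time(A):
        """Closed-form spike time of equation (7) for a DNN output A > 0."""
        neg_P = A * math.exp(-T_il / L)         # equation (8)
        C_L = 1 - math.exp(-1 / L)
        return math.ceil(L * math.log(neg_P * C_L / w_tick() + 1))

    A = 3.7
    print(firing_time(A), T(A))    # both give the same spike time, here 16

Intuitively, G(x) already rounds each activation onto the spike-time grid during re-training, so the subsequent spike-time mapping adds no further quantization error.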
pendent on the presentation time Til of pre-synaptic neu-
Mapping Max-Pool rons. As a result, the lossless mapping of the Max-Pool layer
Max-Pool layer is a commonly used down-sampling layer can be done by selecting parameters according to equations
in DNN, replacing it with Average-Pool will inevitably de- (13)(14).
crease the DNN accuracy. But it is non-trivial to compute However solving such constraints for various N and L
maxima with spiking neurons in SNN. In the rate-coding would be complicated and the existence of solutions needs to
based SNN (Rueckauer et al. 2017), the authors proposed a be determined under many circumstances. Here we present
simple mechanism for spiking max-pooling, in which output a simplified solution for mapping Max-Pool layer under our
units contain gating functions that only let spikes from the mechanism. By setting the firing threshold θ = N , ωk = 1
maximally firing neuron pass, while discarding spikes from and canceling the use of ticking neuron and leaky effect, the
other neurons. Their methods prove to be efficient in rate- firing moment of the output neuron is equal to the spiking
coding based SNN but can not be applied here in temporal- moment of the latest firing neuron in the input kernel. In this
coding SNN. way, Max-Pool is realized in SNN with Reverse Coding.
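
The simplified max-pool above reduces to a non-leaky IF neuron that simply counts input spikes; a minimal sketch (our illustration, with invented spike times) is shown below. Because every pre-synaptic neuron in the kernel spikes exactly once and the threshold equals the kernel size N, the output neuron fires exactly when the last input spike arrives, which under Reverse Coding is the spike of the maximum input.

    def snn_maxpool(spike_times):
        """Simplified temporal max-pool: threshold = N, unit weights, no leak, no ticking neuron."""
        N = len(spike_times)
        potential = 0
        for t in range(1, max(spike_times) + 1):
            potential += spike_times.count(t)   # each input contributes one unit spike
            if potential >= N:                  # threshold reached only after the last input spike
                return t

    # Under Reverse Coding, larger inputs spike later, so the latest spike encodes the maximum.
    kernel_spikes = [2, 10, 14, 7]              # spike times of a 2x2 pooling window (made up)
    print(snn_maxpool(kernel_spikes))           # fires at t = 14 = max(kernel_spikes)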

Network Conversion

In this section, we introduce the specific conversion steps and the forward propagation procedure for the converted SNN, based on the theoretical analysis above.

DNN Adjustment and Re-training

As shown in figure 2, fetching weights from a fitted DNN model consists of the following steps. As in previous rate-coding based conversion methods, a minor adjustment of the original DNN is needed.

Figure 2: (Diagram) the original CNN pipeline (CONV, ReLU, POOL, IP, ReLU) is trained and adjusted by replacing ReLU with G(x), and the adjusted network is then re-trained.

Algorithm 1 Layer propagation in SNN
Require: pre-synaptic neurons that spike only once; the inhibition time is T_il; a ticking neuron with weight ω_tick connected to all the post-synaptic neurons becomes active after T_il.
Ensure: post-synaptic neurons spike only once.
1: Inhibitory Period. Post-synaptic neurons accumulate potential according to the pre-synaptic spikes. All of them are inhibited from firing until T_il.
2: Determine Parameters. Update the weight ω_tick of the ticking neuron according to T_il and equations (9) and (13).
3: Active Period. The ticking neuron fires at each time step, increasing the potential of the post-synaptic neurons. The post-synaptic neurons then fire according to the LIF neural model.

… the winner is the neuron with the largest potential, representing the final result.
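
As a concrete illustration of Algorithm 1, the Python sketch below (our own simulation with made-up weights, inputs and constants; not the authors' code) propagates one fully-connected unit through the three phases: pre-synaptic spikes are integrated with negated weights during the inhibitory period, the ticking-neuron weight is set by equation (9), and during the active period the neuron fires once its potential reaches the threshold θ = 0. The resulting firing step matches the temporal-coding function T(A) of equation (10) applied to the effective weighted sum.

    import math

    L, T_il = 10.0, 60                      # leaky constant and inhibition time (illustrative)
    decay = math.exp(-1.0 / L)
    C_L = 1.0 - decay

    x = [3.0, 1.5, 0.2]                     # DNN-side activations of the pre-synaptic neurons
    w = [0.8, -0.4, 1.2]                    # DNN weights; the SNN uses the negated weights -w

    # Reverse Coding: each pre-synaptic neuron spikes once at time T(x_i), equation (10).
    spike_time = [math.ceil(L * math.log(xi + 1)) for xi in x]

    w_tick = C_L * math.exp(-T_il / L)      # phase 2 (Determine Parameters), equation (9)

    v = 0.0
    for t in range(1, T_il + 1):            # phase 1: inhibitory period, integrate but never fire
        v = v * decay + sum(-wi for wi, ti in zip(w, spike_time) if ti == t)

    fire_step = None
    for k in range(1, 1000):                # phase 3: one ticking spike per step until firing
        v = v * decay + w_tick
        if v >= 0.0:                        # threshold theta = 0
            fire_step = k
            break

    # Effective weighted sum A = sum_i w_i * G(x_i) of the adjusted DNN (exp(t_i/L) = G(x_i)).
    A = sum(wi * math.exp(ti / L) for wi, ti in zip(w, spike_time))
    print(fire_step, math.ceil(L * math.log(A + 1)))    # both equal 16 for these values

In a full network, these once-per-neuron output spikes then act as the Reverse-Coded inputs of the next layer.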

Network [Dataset]     Network depth   DNN err. (%)   Previous SNN err. (%)                 Our SNN err. (%)
Lenet [MNIST]         11              0.84           0.56 (Rueckauer et al. 2017)          0.92
Alexnet [ImageNet]    20              42.84/19.67    48.2/23.8 (Hunsberger 2018)           43.3/20.17
VGG-16 [ImageNet]     38              30.82/10.72    50.39/18.37 (Rueckauer et al. 2017)   29.13/9.89

The experimental results also show that our method outperforms the previous SNN architectures, especially in large networks. The error rate of our SNN on Lenet is 0.36% larger, which is trivial compared with the accuracy gap on complex tasks. When processing complex tasks, our method achieves a significant improvement in SNN accuracy, as on Alexnet (4.9%/3.63%) and VGG-16 (21.26%/8.48%). This proves that limiting the firing number of each neuron to only one spike does not reduce the accuracy of the SNN. With the proposed ticking neuron mechanism, the performance of the temporal-coding based SNN is much better than that of the rate-coding based SNNs. All the conversion results on the three benchmarks show that temporal-coding based SNNs are now able to achieve competitive accuracy compared with either DNNs or rate-coding based SNNs.

Computation Cost

To evaluate the number of operations in the networks during the entire forward propagation, we separately count the addition and multiplication operations in the layers, including CONV, POOL and IP. In addition to the multiplication and addition operations in vector dot products or potential accumulations, we also take comparison operations into account: each comparison operation in the POOL layer is treated as an addition operation, and the leak of the LIF neurons at each time step is counted as a multiplication operation.
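
To make the accounting explicit, the following sketch (our reading of the counting rules above; the per-layer quantities are hypothetical inputs, not measured values) tallies operations for one converted synapse-based layer, counting one addition per delivered spike per synapse, one addition per post-synaptic neuron per ticking step, and one multiplication per neuron per time step for the leak.

    def snn_layer_ops(n_pre, n_post, fan_out, ticking_steps, total_steps):
        """Rough operation count for one converted layer under the stated accounting."""
        adds_inputs = n_pre * fan_out              # each pre-synaptic neuron spikes exactly once
        adds_ticking = n_post * ticking_steps      # ticking neuron drives every post-synaptic neuron
        mults_leak = n_post * total_steps          # leak decay counted as one multiplication per step
        return adds_inputs + adds_ticking, mults_leak

    # Hypothetical fully-connected layer: 1024 inputs, 256 outputs, 100-step presentation window.
    adds, mults = snn_layer_ops(n_pre=1024, n_post=256, fan_out=256,
                                ticking_steps=40, total_steps=100)
    print(adds, mults)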

The SNN brings additional operations due to the ticking neuron with its high spike frequency, so that the number of addition operations in the DNN is 0.7×/0.95×/0.95× that in the SNN on the three benchmarks. Combining the multiplication and addition operations, we obtain the results on total operations. For a smaller network, such as Lenet, the total number of operations in the SNN is 1.14× that of the DNN. The reason is that the number of pre-synaptic neurons is so small that the increase in operations brought by the ticking neurons becomes significant. For larger networks, such as Alexnet and VGG-16, the computation benefits are obvious: the SNN reduces the number of total operations by 41.9%/42.6%, which proves that our SNN can obtain a significant computation reduction in larger networks.

Leaky Constant L

Our conversion is much simpler and more convenient than other rate-coding based conversions. The previous rate-coding based SNN conversion methods need to determine various important parameters, including the maximum spike frequency, the spiking threshold of each layer, etc. Determining these parameters costs huge labour and leads to unstable performance of the converted SNN. In our approach, however, all the parameters are selected according to the preceding theoretical analysis. Among them, the leak constant L is our primary concern.

Figure 4: Effects of leaky constant L on accuracy. (Plot of accuracy (%) against the leaky constant L.)

As the output of equation (10), the exact firing time increases as the leaky constant becomes larger.
Therefore, a large L will increase the presentation time T_il of each layer and map the input stimulus onto more precise spike timings, which brings better performance. Choosing a small L may narrow the presentation time, resulting in divergent loss in the DNN with the adjusted architecture. The presentation time also affects the computation overhead of the converted SNN, as the ticking neuron fires constantly.

We evaluate the effect of the leaky constant L on accuracy, as shown in figure 4. We find that, for Lenet, when L changes from 1 to 5 the accuracy changes little, from 98.47% to 99.08%. This is because on simple tasks such as the MNIST dataset the network can stand much more degraded input values. However, for Alexnet and VGG-16 there is a significant change as L increases. When L changes from 1 to 5, the accuracy of Alexnet changes from 42.08%/66.96% to 56.7%/79.83%, and that of VGG-16 changes from 0%/0% (training failure) to 70.87%/90.11%. The adjusted DNN cannot converge when L is too small, as shown for VGG-16, resulting in a collapse of the entire network.
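
The loss of resolution for small L is easy to see numerically. In the short sketch below (our illustration), the temporal-coding function T(x) of equation (10) is applied to a few activation values: with L = 1 many distinct activations collapse onto the same spike time, while L = 5 or L = 10 keeps them apart at the cost of a longer presentation window.

    import math

    def T(x, L):
        """Temporal-coding function of equation (10)."""
        return math.ceil(L * math.log(x + 1))

    values = [0.2, 0.5, 1.0, 2.0, 4.0]
    for L in (1, 5, 10):
        print(L, [T(x, L) for x in values])
    # L = 1  -> [1, 1, 1, 2, 2]   (many values share a spike time: information is lost)
    # L = 5  -> [1, 3, 4, 6, 9]
    # L = 10 -> [2, 5, 7, 11, 17] (finer timing, but a longer presentation window)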

… to complete a task. In fact, only the compute-intensive layers of the SNN need to be deployed; layers that require fewer vector products, such as max-pool, can be calculated in other components of a chip or processing unit. What is more, some biological mechanisms can be abandoned to accelerate computation.

Note that the ticking neuron plays the role of exciting post-synaptic neurons biologically, resulting in an interpretable SNN model. However, according to the evaluation in the last section, deploying the ticking neuron enlarges the computation cost to some extent. Thus, to accelerate computing and reduce memory cost, it can be removed when deploying SNN layers in hardware. This also allows the trained DNN weight model to be converted directly to an SNN weight model without weight negation and without determining other parameters such as ω_tick and ω_k.

Figure: SNN Ops/CNN Ops. (Plot of the ratio of SNN operations to CNN operations.)

Figure: Deployed in SNN hardware. (Diagram: SNNCONV and SNNPOOL layers connected through an encoding function that converts between potential and spike timing.)

novel Ticking Neuron mechanism. Our method can convert DNNs to temporal-coding SNNs with little accuracy loss, and the converted temporal-coding SNNs are consistent with biological models. Based on our experiments, the presented SNNs can significantly reduce computation cost, and they are potential cost-saving alternatives when deployed in SNN hardware.

Acknowledgments

This work is partially supported by the National Key Research and Development Program of China (under Grants 2017YFA0700902 and 2017YFB1003101), the NSF of China (under Grants 61472396, 61432016, 61473275, 61522211, 61532016, 61521092, 61502446, 61672491, 61602441, 61602446, 61732002, and 61702478), the 973 Program of China (under Grant 2015CB358800), the National Science and Technology Major Project (2018ZX01031102), the Transformation and Transfer of Scientific and Technological Achievements of the Chinese Academy of Sciences (KFJ-HGZX-013) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDBS01050200).

References

Abdel-Hamid, O.; Mohamed, A. R.; Jiang, H.; and Penn, G. 2012. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, 4277–4280.

Bohte, S. M.; Kok, J. N.; and Han, A. L. P. 2000. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48(1):17–37.

Cao, Y.; Chen, Y.; and Khosla, D. 2015. Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition. Kluwer Academic Publishers.

Diehl, P.; Neil, D.; Binas, J.; Cook, M.; Liu, S.-C.; and Pfeiffer, M. 2015. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing.

He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Deep residual learning for image recognition. 770–778.

Hinton, G.; Deng, L.; Yu, D.; Dahl, G. E.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; and Sainath, T. N. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29(6):82–97.

Hodgkin, A. L., and Huxley, A. F. 1990. A quantitative description of membrane current and its application to conduction and excitation in nerve. Bulletin of Mathematical Biology 52(1-2):25–71.

Hunsberger, E. 2018. Spiking deep neural networks: Engineered and biological approaches to object recognition. UWSpace.

Kim, Y. 2014. Convolutional neural networks for sentence classification. Eprint Arxiv.

Koch, C., and Segev, I. 1998. Methods in Neuronal Modeling: From Ions to Networks. MIT Press.

Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Pereira, F.; Burges, C. J. C.; Bottou, L.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 25. Curran Associates, Inc. 1097–1105.

Krizhevsky, A. 2009. Learning multiple layers of features from tiny images.

Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278–2324.

Lee, J. H.; Delbruck, T.; and Pfeiffer, M. 2016. Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience 10.

Maass, W. 1997. Network of spiking neurons: the third generation of neural network models. Transactions of the Society for Computer Simulation International 14(4):1659–1671.

Perezcarrasco, J. A.; Zhao, B.; Serrano, C.; Acha, B.; Serranogotarredona, T.; Chen, S.; and Linaresbarranco, B. 2013. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate-coding and coincidence processing. Application to feed forward convnets. IEEE Transactions on Pattern Analysis & Machine Intelligence 35(11):2706–2719.

Rueckauer, B., and Liu, S. C. 2018. Conversion of analog to spiking neural networks using sparse temporal coding. In IEEE International Symposium on Circuits and Systems, 1–5.

Rueckauer, B.; Lungu, I. A.; Hu, Y.; Pfeiffer, M.; and Liu, S. C. 2017. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience 11:682.

Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; and Bernstein, M. 2014. Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3):211–252.

Sainath, T. N.; Mohamed, A. R.; Kingsbury, B.; and Ramabhadran, B. 2013. Deep convolutional neural networks for LVCSR. In IEEE International Conference on Acoustics, Speech and Signal Processing, 8614–8618.

Severyn, A., and Moschitti, A. 2015. Learning to rank short text pairs with convolutional deep neural networks. In The International ACM SIGIR Conference, 373–382.

Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. Computer Science.

Thorpe, S.; Delorme, A.; and Rullen, R. V. 2001. Spike-based strategies for rapid processing. Neural Networks 14(6):715–725.

Van, R. R., and Thorpe, S. J. 2001. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Computation 13(6):1255–1283.

Zhang, T.; Zeng, Y.; Zhao, D.; and Shi, M. 2018. A plasticity-centric approach to train the non-differential spiking neural networks.