# Concepts in Computational Neuroscience

December 18, 2013

# Computational Neuroscience by Weights and States

## Motivation

One of the simplest ways to model a neural network is to start from a few basic assumptions:

• Each neuron has a state.
• A neuron receives input from multiple neurons; the effect of each input is scaled by a connection weight.
• The network accumulates updates via a stepwise iteration of its components.

## The Neural Network Model

Consider a single neuron N receiving input from neurons indexed by i. According to our assumptions above, there is a weight $w_i$ for each input connection, and each input neuron has a current state $s_i$. What will be the effect on N? We combine the inputs and the weights to produce our output O via Hebb's rule.

$O = \sum\limits_{i} w_i s_i$

Note that we are not dealing with individual values, but rather vectors: collecting the weights into $\vec{w}$ and the states into $\vec{s}$, the weighted sum above is simply the dot product $\vec{w} \cdot \vec{s}$.

Once O is determined, our neuron N must decide what to do. What will the effect of the weighted input be? This is determined by an activation function. There are various types of activation functions. For instance, one type of activation function is based on a threshold. That is, if the weighted sum exceeds a certain amount, N will fire. There also exist sigmoidal activation functions. In essence, an activation function indicates the state of N given the total summed input.
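As a concrete sketch in R (the weights, states, and threshold value here are arbitrary illustrations, not values from the text):

```r
# Hypothetical weights and states for the inputs to neuron N
w = c(0.5, -0.3, 0.8)   # connection weights
s = c(1, 0, 1)          # current states of the input neurons

# Weighted sum of inputs (dot product)
O = sum(w * s)

# Threshold activation: fire (1) if the summed input exceeds a cutoff
threshold = function(x, theta = 1) ifelse(x > theta, 1, 0)

# Sigmoidal activation: smooth output between 0 and 1
sigmoid = function(x) 1 / (1 + exp(-x))

threshold(O)  # 1, since 0.5 + 0.8 = 1.3 exceeds the cutoff of 1
sigmoid(O)
```

Either function maps the summed input to a new state for N; the threshold version is all-or-none, while the sigmoid version is graded.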

## Putting it all together - iterating through the network

Consider a network of neurons that are interconnected in complex ways. Let's say we are dealing with 10 neurons, each of which receives connections from one or more of the others. For example, neurons 1, 2, and 4 receive input from neurons 5 and 6, whereas neuron 6 receives input from every neuron except number 9. When we iterate through this network, what will be the final states of our neurons? If we assume an initial state, we can compute as follows.

1. Form a matrix, in our case, it will be 10 x 10, of all the connection weights between the neurons.
2. Multiply the matrix by the current states of all the neurons, represented as a column vector, to give the new summed values, also as a column vector, for each neuron.
3. Determine, based on these values, what the new states will be for each neuron. Do this using the activation function. This gives you a column vector of new states.
4. Repeat from step 2 until the states converge.
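The four steps above can be sketched in R; the weight matrix and initial states below are random placeholders rather than the specific connectivity described:

```r
set.seed(1)
n = 10
# Step 1: a 10 x 10 matrix of connection weights (random for illustration)
W = matrix(runif(n * n, -1, 1), nrow = n, ncol = n)
# Threshold activation function, applied elementwise
activate = function(x) ifelse(x > 0, 1, 0)
# Initial states as a column vector
s = matrix(rbinom(n, 1, 0.5), ncol = 1)
for (step in 1:50) {
  # Step 2: weighted sums for every neuron at once
  summed = W %*% s
  # Step 3: new states via the activation function
  s_new = activate(summed)
  # Step 4: stop when the states no longer change
  if (all(s_new == s)) break
  s = s_new
}
print(t(s))
```

Note that such a network can also oscillate rather than settle, which is why the loop is capped at 50 iterations here.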

# Supervised Learning: Delta Rule/Back Propagation

In the field of learning via computational neuroscience, there is an important distinction between supervised learning and unsupervised learning. Learning that has specific targets and goals is called supervised learning (e.g., learning to write the alphabet of a language), whereas learning that has no clear objective is called unsupervised learning (e.g., being thrown into a new work environment and absorbing lots of random things). Hebb's rule above, in its basic form, is unsupervised learning; there is no clear goal or purpose. We proceed with mathematical models of supervised learning.

In supervised learning, we must pair an input with an output in an effort to establish our targets. Let $\vec{t}$ be our target and $\vec{out}$ be our actual output. As we develop a mapping strategy to match our input to our target, we consider the error at each step to motivate the new weights. Let j index the pre-synaptic neuron, i the post-synaptic neuron, and $w_{ij}$ the weight between them.

$s_i = \sum\limits_{j=1}^N w_{ij} * s_j$

The goal is to correct weights that most contribute to the error. We define a delta update rule with a learning rate $\alpha$.

$\Delta w_{ij} = \alpha (t_i - out_i) s_j$
$w_{ij_{new}} = w_{ij} + \Delta w_{ij}$
$SSE = \frac{1}{2}\sum (out_i - t_i)^2$

The sum squared error (SSE) equation is adapted from potential energy equations. The goal is to tend downhill, to lower SSE as we update our weights.

## Case Study: AND Network

A boolean AND operator has the following inputs and outputs: (0 0) = 0; (0 1) = 0; (1 0) = 0; (1 1) = 1. Let's train a network using supervised learning. We begin with an arbitrary weight matrix, (1, -0.6), present the four patterns repeatedly, and use $\alpha = 0.1$.

```r
# Set up the input patterns, the targets, and the weight matrix
i = matrix(c(0,0,0,1,1,0,1,1),nrow=4,ncol=2,byrow=T)
t = matrix(c(0,0,0,1),nrow=4,ncol=1,byrow=T)
w = matrix(c(1,-0.6),nrow=1,ncol=2,byrow=T) # w starts out at (1, -0.6)

# Initialize the learning rate for delta learning
alpha = 0.1

# Initialize the order of examples to iterate through
examples = rep(c(1,2,3,4),10)

# Loop through the examples, updating w
errors = rep(NA,length(examples))
count = 1
for(ind in examples) {

  # Compute delta W (x%*%y is matrix multiplication)
  out = w%*%i[ind,]
  deltaw = alpha*((t[ind,]-out)%*%i[ind,])

  # Update the weights using delta W
  w = w+deltaw

  # Compute the error over all four patterns
  sse = 0
  for(p in seq(1,4)) {
    out = w%*%i[p,]

    # Accumulate the squared differences
    sse = sse + (out-t[p,])^2
  }

  # Save data for plotting
  errors[count] = sse
  count = count + 1
}

# Check the final w matrix
print(w)
plot(errors,type="l",xaxt="n",ylab="Sum Squared Error",xlab="Trained on Pattern")
axis(side=1,labels=examples,at=seq(1,length(errors)))
```

We can see how the model learns the appropriate weights for this AND input/output. Note that delta rule learning cannot solve more complex expressions like XOR; for these, back propagation is needed, though no direct analogue of back propagation has been found in the brain.

# Circuit Modeling

Let's consider a circuit as a model of a neuron, in which the resistor represents all the pores of the neuron and the capacitor represents the accumulation of ions in the water adjacent to either side of the membrane. The resistor is associated with conductance $g_{leak}$ or resistance $R_{leak}$; note that $g = 1/R$. In electrophysiology, we insert current by piercing through the membrane and injecting current inside the cell (labeled $I_{in}$). The current passes out of the cell through the pores (resistor) or through the capacitor. Once it reaches the outside of the cell, it disperses to the rest of the body.

We now consider the circuit physics to model neuron electrophysiology. The current through the resistor is given by Ohm's Law $V = IR \implies I = V/R \implies I = gV$ and the current through the capacitor is given by $Q = CV \implies \frac{dQ}{dT} = I = C\frac{dV}{dt}$.

$I_{in} = C\frac{dV}{dt} + gV$

This is an update equation (differential equation) that we can simplify by thinking of $V^{+}$ as the voltage at the next time step.

$I_{in} = C\frac{V^{+}-V}{\Delta t} + gV$
$\implies V^{+} = (1-\Delta t\frac{g}{C})V + \Delta t\frac{I_{in}}{C}$ where $t^{+} = t + \Delta t$

We can now graph $V$ (in mV) versus $t$ (in ms) with fixed values for our conductance $g$ and capacitance $C$.

```r
g = 1      # leak conductance, mS/cm^2
C = 1      # membrane capacitance, uF/cm^2
Iin = 1    # injected current
N = 5000
dT = 0.001
V = rep(NA,N)
t = rep(NA,N)
V[1] = 0
t[1] = 0
for(i in seq(2,N)) {
  V[i] = (1-dT*g/C)*V[i-1]+dT*Iin/C
  t[i] = t[i-1] + dT
}
plot(t,V,type="l",xlab="Time (ms)",ylab="Voltage (mV)")
```


One important measure is $\tau_{mem}$, the time required for the voltage to reach about 63% of $V_{max}$: $\tau_{mem} = \frac{C}{g} = RC$. We can solve this differential equation analytically; with $g = C = I_{in} = 1$, the solution is $v(t) = 1 - e^{-t}$. We can plot this in R to validate that it is identical to the graph above.

```r
t = seq(0,5,by=0.01)
plot(t,1-exp(-t),type="l",xlab="Time (ms)",ylab="Voltage (mV)")
```


## Alpha function

Post-synaptic potentials (PSPs) are often modeled by an alpha function, given by the following expression,

$I_{in} = I_{max}\frac{t}{\tau_{a}}e^\frac{-(t-\tau_{a})}{\tau_{a}}$

where $\tau_{a}$ is the time constant of the alpha function. The function rises quickly over roughly $\tau_{a}$ and then decays over roughly $5\tau_{a}$.

```r
alpha = function(t,Taua) {
  Imax = 1  # peak current
  return(Imax*(t/Taua)*exp(-(t-Taua)/Taua))
}

t = seq(0,40,by=0.01)
plot(t,alpha(t,2),type="l",col="red",xlab="t",ylab="I_in")
lines(t,alpha(t,4),col="blue")
lines(t,alpha(t,6),col="green")
legend("topright",col=c("red","blue","green"),c("2","4","6"),lwd=2,title="Tau alpha values")
```

Note that this is a model of the post-synaptic potential. Oftentimes, people will confuse this model and state that it is an oversimplified model of an action potential. But be careful to note the differences between a post-synaptic potential and an action potential: in a true action potential the depolarization is immediate and steeply evoked, and the repolarization dips below resting membrane potential, neither of which this model shows. EPSPs are signals that arrive through the dendrites, and if there are enough of them, they sum to produce an action potential.

# Hodgkin Huxley

The previous circuit models above used passive components, whereby the conductance remains constant. In a Hodgkin Huxley model, circuits with rheostats are used to represent active components, in which conductors have variable conductance in response to various parameters of the model, such as changes in membrane voltage or chemical activation. An action potential is generated by voltage-sensitive channels, so an active channel model is more realistic. A circuit in this model consists of charge separation (battery), channels opening and closing (variable conductors), and a switch to dictate whether ion movement is depolarizing or hyperpolarizing.

We first establish some electricity concepts. Current is defined as the flow of positive charge; in a wire it is carried by electrons, so the direction of electron flow is opposite to the direction of current. Current is measured via voltage clamp (holding voltage constant). We define current flowing into the membrane from the outside as positive. There is a higher concentration of Na+ outside the cell and a higher concentration of K+ inside the cell; for example, a higher Na+ flux means an inward current. Since the Cl- equilibrium potential is near resting membrane potential (RMP), chloride flux can be either inward or outward. Because chloride carries a negative charge, a decrease in the inward flux of chloride is considered an inward current.

In this model, we have a leak segment on the left (passive component) and a variable segment on the right (active component). In this model, the conductors (resistors) with arrows indicate rheostats, which are variable conductors. Note the flow of current; in the sodium branch, the battery is flipped and thus negative charge flows up the circuit. We measure voltage in the cytoplasm using electrophysiology. Note that $E$ is simply electric potential, and is measured in volts. As above, $g$ represents conductance, which is inversely related to resistance. $g_{leak}$ is a fixed membrane conductor, whereas $g_{K}$ and $g_{Na}$ are variable (i.e., rheostats).

We now consider the model using laws of electricity.

$0 = I_c + I_{Na} + I_K + I_{leak}$ (Kirchhoff's Law)
$\implies -I_c = I_{Na} + I_K + I_{leak}$
$\implies -C\frac{dV}{dt} = g_{leak}(V_{memb}-E_{leak}) + g_{Na}(V_{memb}-E_{Na}) + g_{K}(V_{memb}-E_{K})$

Note that $I_c>0$ means depolarization, since the inside becomes more positive, given $I_c=C\frac{dV}{dt}$. At steady state, $\frac{dV}{dt} = 0$, which yields the steady-state membrane potential (a chord conductance equation, analogous in form to the Goldman Hodgkin Katz equation):

$V_{memb} = \frac{g_{leak}E_{leak}+g_{Na}E_{Na}+g_{K}E_{K}}{g_{leak}+g_{Na}+g_{K}}$
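A quick numeric check of this steady-state expression (the conductance and reversal-potential values here are illustrative, loosely based on the simulation parameters used later):

```r
# Illustrative conductances (mS/cm^2) and reversal potentials (mV)
gleak = 0.3; Eleak = -54.4
gna   = 1.0; Ena   = 50
gk    = 10;  Ek    = -77

# Steady-state membrane potential: the conductance-weighted
# average of the reversal potentials
Vmemb = (gleak * Eleak + gna * Ena + gk * Ek) / (gleak + gna + gk)
Vmemb
```

Because the potassium conductance dominates in this example, the result sits close to $E_K$; as $g_{Na}$ grows, $V_{memb}$ is pulled toward $E_{Na}$.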

Where does the battery come from? Let's consider the sodium battery, but the process is similar for potassium.

1. Sodium is pumped from the inside to outside (active), resulting in higher concentration of sodium outside the cell.
2. As a result of increased sodium outside the cell, it wants to flow inside due to diffusion. But as long as the channels remain closed, sodium cannot diffuse.  Once the sodium channels open and sodium flows, charge separation across the membrane forms.
3. The excess sodium ions outside the membrane stay near the membrane, attracted to the unmatched negative charges left behind on the inside by the loss of sodium through the membrane. The positives on the outside of the membrane and the negatives on the inside create an electric field.

To further understand the Hodgkin Huxley model, we must consider the stages of an action potential. Firstly, depolarization via Na+ channels is a positive feedback loop that results in more and more depolarization. The Na+ channels inactivate more slowly, though: the action potential reaches its peak and the Na+ channels switch off, at which point negative feedback kicks in and the potassium conductance is turned on.

## Particle Duality

The Hodgkin Huxley model correctly predicted that the gates within ion channels turn on and off independently. They called these gating elements particles, and assigned the value 0 to particles that were off (blocking) and 1 to those that were on (nonblocking). Potassium has 4 $n$ particles per channel, hence the $n^4$ term. Sodium has 3 $m$ particles and 1 $h$ particle per channel, giving $m^3h$. $m$ particles turn on gradually from 0 to 1 as Na+ channels open, during which $h$ goes from 1 to 0, but more slowly. At the same rate that $h$ approaches 0, $n$ particles go from 0 to 1 (giving K+ more influence over the current).

Particles determine conductance as a fraction of the maximum conductance of Na+ and K+ respectively. That is, $g_{Na} = \bar{g}_{Na}m^3h$ and $g_{K} = \bar{g}_{K}n^4$, where $\bar{g}$ denotes the maximum conductance. At resting potential (negative V), $m$ is near 0 and $h$ is near 1. The steady state of each particle, $x_{\infty}$, describes the state the particle reaches after an infinite amount of time. A particle approaches this state at a rate set by its time constant, $\tau$. For instance, $\tau_m$ determines how fast $m$ approaches $m_{\infty}$. Note that $\tau$ values are dependent on temperature.

## Hodgkin Huxley Equations

We have four state variables ($V$, $m$, $h$, $n$) each with their own differential equations, meaning the state of a Hodgkin Huxley neuron is a 4 dimensional variable, unlike a single dimension that we assigned neuron states above in Hebb learning. The four differential equations interact. An action potential can be thought of as all four state variables approaching their steady states.

Note that throughout the following equations, $x = m$, $h$, or $n$. The only state variable that can be measured is V; the rest are based on populations of switches that turn Na+ and K+ channels on and off.
$\tau_x \frac{dx}{dt} = x_{\infty} - x$
$x_{\infty}(V) = \frac{\alpha_x(V)}{\alpha_x(V)+\beta_x(V)}$
$\tau_x(V) = \frac{1}{\alpha_x(V)+\beta_x(V)}$
$\alpha_m(V)=\frac{0.1(V+40)}{1-e^{-0.1(V+40)}}$
$\beta_m(V) = 4e^{-0.0556(V+65)}$
$\alpha_h(V)=0.07e^{-0.05(V+65)}$
$\beta_h(V)=\frac{1}{1+e^{-0.1(V+35)}}$
$\alpha_n(V)=\frac{-0.01(V+55)}{e^{-0.01(V+55)}-1}$
$\beta_n(V)=0.125e^{-0.0125(V+65)}$
$-C\frac{dV}{dt} = \bar{g}_{Na}m^3h(V-E_{Na})+\bar{g}_Kn^4(V-E_K)+\bar{g}_{leak}(V-E_{leak})$ (from the Hodgkin Huxley circuit)
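Before running the full simulation, we can compute the steady state and time constant of a single particle directly from the rate functions; a sketch for the $m$ particle, using $\alpha_m$ and $\beta_m$ as given above:

```r
# Hodgkin-Huxley rate functions for the m particle (V in mV)
am = function(v) 0.1*(v+40)/(1-exp(-(v+40)/10))
bm = function(v) 4*exp(-(v+65)/18)

# Steady state and time constant as functions of voltage
minf = function(v) am(v)/(am(v)+bm(v))
taum = function(v) 1/(am(v)+bm(v))

minf(-65)  # near 0 at rest (about 0.05)
minf(0)    # near 1 when depolarized

v = seq(-100, 40, by = 1)
plot(v, minf(v), type = "l", xlab = "V (mV)", ylab = "m_inf")
```

The value of $m_{\infty}$ at -65 mV is what motivates the initial condition m = 0.052 in the simulation code that follows.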

We can put these differential equations into R and plot the action potential.

```r
library(simecol)
## Hodgkin-Huxley model
HH = odeModel(
  main = function(time, init, parms) {
    with(as.list(c(init, parms)),{

      # Rate functions for the m, h, and n particles
      am = function(v) 0.1*(v+40)/(1-exp(-(v+40)/10))
      bm = function(v) 4*exp(-(v+65)/18)
      ah = function(v) 0.07*exp(-(v+65)/20)
      bh = function(v) 1/(1+exp(-(v+35)/10))
      an = function(v) 0.01*(v+55)/(1-exp(-(v+55)/10))
      bn = function(v) 0.125*exp(-(v+65)/80)

      # The four coupled differential equations
      dv = (I - gna*h*(v-Ena)*m^3-gk*(v-Ek)*n^4-gl*(v-El))/C
      dm = am(v)*(1-m)-bm(v)*m
      dh = ah(v)*(1-h)-bh(v)*h
      dn = an(v)*(1-n)-bn(v)*n

      return(list(c(dv, dm, dh, dn)))
    })
  },
  ## Set parameters
  parms = c(Ena=50, Ek=-77, El=-54.4, gna=120, gk=36, gl=0.3, C=1, I=2.5),
  ## Set integration times
  times = c(from=0, to=40, by = 0.25),
  ## Set initial state
  init = c(v=-65, m=0.052, h=0.596, n=0.317),
  solver = "lsoda"
)
HH = sim(HH)
plot(HH)
```

If we inject additional current, the spiking remains intact for a longer time. We can plot the action potential with a variable external stimulus (current):

```r
# NOTE: this code must be executed after the above code
times(HH)["to"] = 100
## Stimulus values to test
I = c(2, 5, 5.97, 5.975, 6.2, 6.5)
sims = do.call("rbind",
  lapply(I, function(i){
    parms(HH)["I"] = i
    HH = sim(HH)
    cbind(I=paste("I =", i), out(HH))
  }))
## Plot the various experiments with lattice
library(latticeExtra)
asTheEconomist(
  xyplot(v ~ time | factor(I), data=sims, type="l",
    as.table=TRUE,
    main="Hodgkin-Huxley model with varying external stimulus"),
  xlab="time (ms)", ylab="mV")
```

# Compartment Modeling

In compartment modeling, multiple neurons, or smaller segments of individual neurons, are split up and modeled as separate Hodgkin Huxley models. Each compartment of a neuron is modeled as an RC or parallel conductance circuit, and individual compartments are connected by resistors. Inputs that are farther from the spike-generating zone will have less influence on the output than those that are closer, and compartment models capture this. A single compartment, in theory, must be equipotential: the potential at all points within that compartment should be the same. To achieve this, tiny compartments are often used, though doing so dramatically increases the computation time of these models. Normally, each compartment corresponds to a single section of membrane; the smaller those sections, the more realistic the model. An individual compartment is equipotential but will typically differ in voltage from its neighboring compartments. In this way, different parts of the neuron can be doing different things at the same time, which is more realistic. The voltage difference from one compartment to the next is given by Ohm's law: $V_+ - V_- = I_l R_l$.

Each compartment, as a Hodgkin Huxley circuit, will have a membrane capacitance and a leak conductance, and neurophysiological data may suggest including additional conductances in some compartments; these can be either excitatory or inhibitory. Individual compartments in a multicompartment model are connected to their neighbors via resistors, which represent the cytoplasmic axial resistance between the centers of two connected compartments. Once the entire compartmental circuit is drawn, it can be simulated via the same steps as the basic Hodgkin Huxley model: (1) Kirchhoff's law is used to sum the currents, (2) Ohm's law and the capacitance law are used to determine the passive currents, (3) the Hodgkin Huxley equations and parameters describe the particle interactions, and (4) numerical integration solves the differential equations.
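As a minimal sketch, consider two passive compartments coupled by an axial conductance (all parameter values here are arbitrary illustrations, and the active Hodgkin Huxley conductances are omitted for brevity):

```r
# Two passive compartments joined by an axial conductance gax.
# Current is injected into compartment 1 only.
C = 1; gleak = 0.1; Eleak = -65   # per-compartment membrane properties
gax = 0.05                        # axial coupling conductance
Iin = 1                          # current injected into compartment 1
dt = 0.01; N = 10000
V1 = numeric(N); V2 = numeric(N)
V1[1] = Eleak; V2[1] = Eleak
for (i in 2:N) {
  # Axial current from compartment 1 into compartment 2 (Ohm's law)
  Iax = gax * (V1[i-1] - V2[i-1])
  V1[i] = V1[i-1] + dt * (Iin - gleak * (V1[i-1] - Eleak) - Iax) / C
  V2[i] = V2[i-1] + dt * (Iax - gleak * (V2[i-1] - Eleak)) / C
}
# Compartment 2, farther from the injection site, depolarizes less
tail(V1, 1); tail(V2, 1)
```

With these values, compartment 1 settles near -57.5 mV and compartment 2 near -62.5 mV, illustrating how an input's influence attenuates with distance from the recording site.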

Various content adapted from Lytton. (2002). From Computer to Brain.
