# Video visualization

In this tutorial we’ll see how to inject a video visualizer into a simulation.

Preamble:

from brian2 import *
import matplotlib.pyplot as plt

from cleo import *

utilities.style_plots_for_docs()

# numpy faster than cython for lightweight example
prefs.codegen.target = 'numpy'
# for reproducibility
np.random.seed(1866)

c_exc = 'xkcd:tomato'
c_inh = 'xkcd:cerulean blue'



## Set up the simulation

### Network

We’ll use excitatory and inhibitory populations of exponential integrate-and-fire neurons.

n_e = 400
n_i = n_e // 4
def eif(n, name):
    ng = NeuronGroup(
        n,
        """
        dv/dt = (-(v - E_L) + Delta_T*exp((v-theta)/Delta_T) + Rm*I) / tau_m : volt
        I : amp
        """,
        threshold="v>30*mV",
        reset="v=-55*mV",
        namespace={
            "tau_m": 20 * ms,
            "Rm": 500 * Mohm,
            "theta": -50 * mV,
            "Delta_T": 2 * mV,
            "E_L": -70 * mV,
        },
        name=name,
    )
    ng.v = -70 * mV
    return ng

exc = eif(n_e, "exc")
inh = eif(n_i, "inh")
W = 250
p_S = 0.3
S_ei = Synapses(exc, inh, on_pre="v_post+=W*mV/n_e")
S_ei.connect(p=p_S)
S_ie = Synapses(inh, exc, on_pre="v_post-=W*mV/n_i")
S_ie.connect(p=p_S)
S_ee = Synapses(exc, exc, on_pre="v_post+=W*mV/n_e")
S_ee.connect(condition='abs(i-j)<=20')

mon_e = SpikeMonitor(exc)
mon_i = SpikeMonitor(inh)

net = Network(exc, inh, S_ei, S_ie, S_ee, mon_e, mon_i)


### Coordinates and optogenetics

Here we configure the coordinates and optogenetic stimulation. For more details, see the “Optogenetic stimulation” tutorial. Note that we save the plotting function’s arguments for reuse later when generating the video.

from cleo.coords import assign_coords_uniform_cylinder
from cleo.viz import plot

r = 1
assign_coords_uniform_cylinder(
    exc, xyz_start=(0, 0, 0.3), xyz_end=(0, 0, 0.4), radius=r
)
assign_coords_uniform_cylinder(
    inh, xyz_start=(0, 0, 0.3), xyz_end=(0, 0, 0.4), radius=r
)

from cleo.opto import (
    OptogeneticIntervention,
    FourStateModel,
    ChR2_four_state,
    default_blue,
)

opto = OptogeneticIntervention(
    name="opto",
    opsin_model=FourStateModel(ChR2_four_state),
    light_model_params=default_blue,
    max_Irr0_mW_per_mm2=30,
)

plotargs = {
    "colors": [c_exc, c_inh],
    "zlim": (0, 0.6),
    "scatterargs": {"s": 20},  # to adjust neuron marker size
}

plot(
    exc,
    inh,
    **plotargs,
    devices=[(opto, {"n_points": 2e4})],
)



### Simulator, optogenetics injection

Here we create the simulator and inject the OptogeneticIntervention.

sim = CLSimulator(net)
sim.inject_stimulator(opto, exc, Iopto_var_name='I')


### Processor

And we set up open-loop optogenetic stimulation:

from cleo.ioproc import LatencyIOProcessor

opto.update(0)
stim_vals = []
stim_t = []
class OpenLoopOpto(LatencyIOProcessor):
    def process(self, state_dict, time_ms):
        # random walk stimulation
        opto_intensity = opto.value + np.random.randn() * 0.5
        if opto_intensity < 0:
            opto_intensity = 0
        # save values for plotting
        stim_vals.append(opto_intensity)
        stim_t.append(time_ms)
        return ({"opto": opto_intensity}, time_ms)

sim.set_io_processor(OpenLoopOpto(sample_period_ms=1))


## Inject VideoVisualizer

A VideoVisualizer is an InterfaceDevice like recorders and stimulators and needs to be injected in order to properly interact with the Brian network. Keep in mind the following:

• It must be injected after all other devices for the devices='all' argument to work as expected.

• As with recording and stimulation, you must specify the target neuron groups (those to display, in this case) on injection.

• The dt argument makes a huge difference in how long it takes to generate the video. You may want to keep it high while experimenting and lower it only when you are ready to produce a high-quality video, since rendering is slow.
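To build intuition for that last point: the number of frames the visualizer must render scales as the simulation length divided by dt. The helper below is purely illustrative (it is not part of the cleo API):

```python
def n_frames(sim_length_ms, dt_ms):
    """Rough count of frames a video visualizer must render (illustrative)."""
    return round(sim_length_ms / dt_ms)

# 100 ms of simulation at dt = 1 ms means 100 frames to render;
# shrinking dt to 0.1 ms multiplies the rendering work tenfold
print(n_frames(100, 1))    # 100
print(n_frames(100, 0.1))  # 1000
```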

from cleo.viz import VideoVisualizer

vv = VideoVisualizer(dt=1 * ms, devices="all")
sim.inject_device(vv, exc, inh)


## Run simulation and visualize

Here we display a quick plot before generating the video:

T = 100
sim.run(T * ms)

fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True)
sptexc = mon_e.spike_trains()
ax1.eventplot([t/ms for t in sptexc.values()], lineoffsets=list(sptexc.keys()), color=c_exc)
ax1.set(ylabel="neuron index", title="exc spiking")
sptinh = mon_i.spike_trains()
ax2.eventplot([t/ms for t in sptinh.values()], lineoffsets=list(sptinh.keys()), color=c_inh)
ax2.set(ylabel="neuron index", title="inh spiking")
ax3.plot(stim_t, stim_vals, c="#72b5f2")
ax3.set(ylabel=r"$Irr_0$ (mW/mm$^2$)", title="optogenetic stimulus", xlabel="time (ms)");



The VideoVisualizer stores the data it needs during the simulation, but hasn’t yet produced any visual output. We now call generate_Animation(), plugging in the arguments we used for the original plot.

Also, we set the max_Irr0_mW_per_mm2_viz attribute of the optogenetic intervention. This effectively scales how bright the light appears in the visualization. That is, a high maximum irradiance makes the stimulus values small in comparison and produces a faint light, while a low ceiling makes the values relatively large and produces a bright light in the resulting video.
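As a rough mental model of this scaling (not cleo’s exact rendering code), you can think of the displayed light’s brightness as the stimulus value normalized by this ceiling; the function name below is a hypothetical illustration:

```python
def light_opacity(irr0, max_irr0_viz):
    """Illustrative only: brightness as stimulus relative to the viz ceiling,
    clipped to [0, 1]."""
    return min(irr0 / max_irr0_viz, 1.0)

# the same 15 mW/mm^2 stimulus looks faint with a high ceiling...
print(light_opacity(15, 60))  # 0.25
# ...and bright with a low one
print(light_opacity(15, 20))  # 0.75
```

Setting the ceiling to max(stim_vals), as below, makes the brightest moment of the simulation render at full opacity.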

opto.max_Irr0_mW_per_mm2_viz = max(stim_vals)
ani = vv.generate_Animation(plotargs, slowdown_factor=10)


The generate_Animation() function returns a matplotlib FuncAnimation object, which you can then use however you like; most likely you’ll want to save it to a video file.

Note that at this point the video still hasn’t been rendered; that happens when you try to save or display the animation. This step takes a while if your temporal resolution is high, so we suggest doing it only after your experiment is finalized, and after you’ve experimented with low-framerate videos to settle on video parameters.
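Saving uses the standard FuncAnimation.save() method. The sketch below is self-contained, with a dummy animation standing in for the one returned by generate_Animation(); the filename and fps are arbitrary, and the "pillow" writer (which produces a GIF without needing ffmpeg installed) is just one of matplotlib’s available writers:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.set(xlim=(0, 1), ylim=(0, 1))
(line,) = ax.plot([], [])

def update(frame):
    # trivial per-frame update, standing in for cleo's rendering
    line.set_data([0, 1], [0, frame / 10])
    return (line,)

# stand-in for the animation returned by vv.generate_Animation()
ani = FuncAnimation(fig, update, frames=10)

# rendering happens here; swap in writer="ffmpeg" and an .mp4
# filename for a video file if ffmpeg is installed
ani.save("demo.gif", writer="pillow", fps=10)
```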

Here we embed the video using HTML so you can see the output:

from matplotlib import rc
rc('animation', html='jshtml')

ani