Applied electricity

Course notes of Applied electricity by Lecturer 林冠中.

Course information

  • Lecturer: 林冠中 (calculus365(at)yahoo.com.tw)
  • Time: Fri. ABCD
  • Location: EE1-201 Lab
  • Office hour (please make a reservation by email):
  • Mon. 5 pm
  • Tue. evening
  • Fri. after class

TL;DR

  • SI units for electrical circuits:
  • time(t): second
  • current(i): Ampere
  • voltage(v): Volt
  • resistance(r): Ohm
  • power(p): Watt

  • SI prefixes

  • Directionality matters. Negative sign = opposite direction
  • Minus to plus: voltage gain; plus to minus: voltage drop
  • Ohm's Law: V = IR
  • Kirchhoff's circuit laws
  • KCL : \(Σi_{in} = Σi_{out}\) (on a node)
  • KVL : \(ΣV_{drop} = ΣV_{supply}\) (around a loop)
  • Power: \(P = IV\)
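The relations above can be checked numerically; a minimal sketch with hypothetical values (a 12 V source across a 4 Ω resistor):

```python
# Ohm's law and the power relation with hypothetical values:
# a 12 V source across a 4-ohm resistor.
V = 12.0      # volts
R = 4.0       # ohms
I = V / R     # Ohm's law: I = V/R
P = I * V     # power: P = IV (equivalently I^2*R or V^2/R)
```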

Current

  • Flow of positive charge versus time: \(i(t) = \frac{dq}{dt}\)
  • 1 Ampere = 1 Coulomb per second

Voltage

  • Change in energy (in Joules) from moving 1 Coulomb of charge
  • 1 Volt = 1 Joule change per Coulomb: \(V = \frac{W}{Q}\)
  • Change in electrical potential (\(φA - φB\))
  • Ground: V = 0 (manually set)
  • \(V_{ab} = V_{a} - V_{b}\)
  • Power supplier (source): pointing from minus (-) to plus (+)
  • Power receiver (load): pointing from plus (+) to minus (-)

Dependent sources

Find where the controlling parameter is and note its units. Wikipedia

Power balance

Rule of thumb: supply = load. Beware directionality of both current and voltage across a device.

Resistive circuits, Nodal and mesh analysis

  • Conductance G = 1/R, Unit: Siemens (S)
  • Short circuit: V = 0, R = 0
  • Open circuit: I = 0, G = 0
  • Passive component: R >= 0
  • Parallel circuit: same voltage
  • Serial circuit: same current

Nodal analysis

  • Use Kirchhoff's current laws (current in = current out)
  • Grounding (set V = 0) on one of the nodes.
  • Super-node (Voltage source): direct voltage difference between nodes, reducing the need to find currents
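Writing KCL at every non-ground node gives a conductance-matrix equation \(G\,V = I\). A sketch for a hypothetical two-node circuit (1 A source into node 1; three 1 Ω resistors):

```python
import numpy as np

# Hypothetical 2-node circuit: 1 A current source into node 1,
# R1 from node 1 to ground, R2 between the nodes, R3 from node 2 to ground.
R1 = R2 = R3 = 1.0
Is = 1.0

# KCL at each node in conductance-matrix form: G @ V = I
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,       1/R2 + 1/R3]])
I = np.array([Is, 0.0])
V = np.linalg.solve(G, I)   # node voltages [V1, V2]
```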

Loop (mesh) analysis

  • Use Kirchhoff's voltage laws (voltage supplied = voltage consumed)
  • Note that currents add up in common sides of the loops.
  • Super-mesh (Current source): directly infer the current in the loop.

Series / parallel circuits

  • Serial: one common point. \(R_s = R_1 + R_2\)
  • Parallel: two common points. \(R_p = \frac{R_1R_2}{R_1 + R_2}\)
  • Analysis: combining resistors bottom-up.
  • Voltage divider: series resistors
  • Current divider: parallel resistors
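A quick numeric sketch of the combination and divider rules, with hypothetical resistor values:

```python
# Series / parallel combinations and the divider rules (hypothetical values).
R1, R2 = 100.0, 300.0
Rs = R1 + R2                  # series combination
Rp = R1 * R2 / (R1 + R2)      # parallel combination

V_in = 12.0
V_R2 = V_in * R2 / (R1 + R2)  # voltage divider: share across R2

I_in = 1.0
I_R1 = I_in * R2 / (R1 + R2)  # current divider: the smaller R takes more current
```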

Resistor tolerance

  • Last colored ring on the resistor.
  • Need to design some room for the components according to the tolerance (min and max values)

Y-Δ transformation

https://en.wikipedia.org/wiki/Y-%CE%94_transform

  • Δ -> Y: denominator = sum of the three Δ resistors; numerator = product of the two adjacent to the node
  • Y -> Δ: denominator = the opposite Y resistor; numerator = sum of the pairwise products
  • Electric bridge balance: products of the opposite sides are the same
  • central current = 0 => equivalent to open circuit
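The Δ -> Y rule above can be sketched directly; the values are hypothetical (a symmetric 3 Ω delta, which should give 1 Ω Y arms):

```python
# Delta -> Y: each Y arm = product of the two delta resistors adjacent
# to that node, divided by the sum of all three. Hypothetical values.
Ra, Rb, Rc = 3.0, 3.0, 3.0
S = Ra + Rb + Rc
R1 = Rb * Rc / S   # Y arm at the node between Rb and Rc
R2 = Ra * Rc / S
R3 = Ra * Rb / S
```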

Question

What is the difference between node and loop analysis?

The principle of superposition

  • Effects of multiple sources can be added individually (superposition).
  • Remove voltage source = short circuit
  • Remove current source = open circuit

Thevenin and Norton equivalent circuits

Thévenin's theorem

  • Simplifies a circuit into a black box using the principle of superposition
  • Find equivalent resistance (\(R_{TH}\)) after source removal.
  • Find equivalent open circuit voltage (source) for Thevenin's theorem
  • Or, find equivalent short circuit current (source) for Norton's theorem
For dependent sources
  • Pure dependent sources cannot self-start: \(V_{TH} = 0\)
  • Finding \(R_{TH}\) requires a probe source: \(R_{TH} = V_p / I_p\)
  • \(I_p\): short-circuit current with a probe source
  • Like one would do in a circuit experiment

Maximum power transfer

When \(R_{Load} = R_{TH}\), the power transferred is maximal: \(P = V_{TH}^2 / (4R_{TH})\)

Norton's theorem

  • \(R_{TH}\) the same as Thevenin
  • \(I_{SC}\) : short circuit current instead of open circuit voltage
  • \(I_{SC} = V_{OC} / R_{TH}\)

Capacitors and Capacitance

  • \(i = C \frac{dV}{dt}\)
  • Smooth voltage change
  • At t=0 and uncharged: short-circuit
  • DC steady-state: open-circuit
  • Energy stored: \(0.5CV^2\)
  • Series / Parallel: opposite to resistors

Inductors and Inductance

  • \(v = L \frac{di}{dt}\)
  • Smooth current change
  • At t=0 and no mag. flux: open-circuit
  • DC steady-state: short-circuit
  • Energy stored: \(0.5Li^2\)
  • Series / Parallel: the same as resistors

AC steady-state analysis

  • Periodic signal: \(x(t) = x(t + nT)\)
  • Sinusoidal waveform: \(x(t) = Acos(\omega t + \phi)\)
  • \(\omega = 2\pi f\)
  • \(f = 1/T\)
  • \(2\pi\) rad = 360 degrees

RMS (Root mean square), effective value

  • Peak = \(\sqrt{2}\) RMS value for sinusoidal current and voltage.
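This can be verified numerically on a sampled sinusoid (the amplitude is a hypothetical 10 units):

```python
import numpy as np

# RMS of one full period of a sinusoid; peak should equal sqrt(2) * RMS.
A = 10.0
t = np.linspace(0, 1, 100_000, endpoint=False)   # exactly one period
x = A * np.cos(2 * np.pi * t)
rms = np.sqrt(np.mean(x**2))                      # ~ A / sqrt(2)
```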

Phase lead / lag

  • Leads by 45 degrees: \(x(t) = Acos(\omega t + 45^o)\)
  • Lags by 45 degrees: \(x(t) = Acos(\omega t - 45^o)\)
  • Pure capacitor + AC circuit: current leads voltage by 90 degrees
  • Pure inductor + AC circuit: current lags voltage by 90 degrees

Complex algebra

  • Euler's formula: \(e^{j\theta} = \cos(\theta) + j\sin(\theta)\)
  • Frequency term (\(e^{j\omega t}\)) is usually omitted in favor of angle notation.
  • Multiplication: angle addition; division: angle subtraction for waveforms of the same freq.

Impedance

  • Generalization to resistance in the complex domain
  • Calculated the same way as resistance
  • Admittance: Generalization to conductance (reciprocal of impedance)
  • Inductor: \(Z_{L} = j\omega L\)
  • Capacitor: \(Z_{C} = 1/(j\omega C)\)
  • Impedance is frequency-dependent. Higher freq: higher impedance from inductors; lower freq: higher impedance from capacitors

Filter

  • Proved from impedance analysis
  • Frequency response: transfer function = gain function
  • Low pass filter: the RC circuit
  • Bode plot: x: input frequency (log scale), y: response (amplitude)

RC, RL, and RLC circuits

  • First, convert it to the equivalent circuit (Thevenin) for further analysis
  • Time constants may be different in charging / discharging due to different circuits

RC transients

  • Voltage is continuous, while current is not.
  • For uncharged capacitor, initial voltage across the capacitor is zero (i.e. short circuit)
  • when charging, it approaches applied voltage. The steady-state is open circuit.
  • Discharging: positive voltage and negative current.
  • Time scale \(\tau_{C} = RC\)
  • Charging transient: \(v_{C} = E - i_{C}R\), \(i_{C} = \frac{E}{R}e^{-t/\tau_{C}}\)
  • Discharging transient: \(v_{C} = V_0e^{-t/\tau_{C}}\), \(i_{C} = -v_{C}/ R\)
  • \(t/\tau_{C} > 5\): > 99% completed charging / discharging
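The charging formula and the 5τ rule can be checked with a short sketch (hypothetical values: E = 5 V, R = 1 kΩ, C = 1 µF):

```python
import math

# RC charging: v_C(t) = E * (1 - exp(-t/tau)), with tau = RC.
E, R, C = 5.0, 1e3, 1e-6
tau = R * C   # 1 ms

def v_C(t):
    return E * (1 - math.exp(-t / tau))

frac = v_C(5 * tau) / E   # fraction charged after 5 time constants
```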

RL transient

  • Current is continuous, while voltage is not.
  • For uncharged inductor, initial current is zero (open circuit); then approaches terminal current upon charging. The steady-state is short circuit.
  • Discharging: positive current and negative voltage (Lenz's law).
  • Time scale \(\tau_{L} = L/R\)
  • Charging transient: \(v_{L} = Ee^{-t/\tau_{L}}\), \(i_{L} = (E - v_{L}) / R\)
  • Discharging transient: \(v_{L} = -I_0Re^{-t/\tau_{L}}\), \(i_{L} = I_0e^{-t/\tau_{L}}\)

RLC transients

  • Solving series RLC circuit by KVL: \(V_R + V_L + V_C = E\)
  • \(\frac{d^2I}{dt^2} + \frac{R}{L}\frac{dI}{dt} + \frac{I}{LC} = 0\), since E is constant (DC)
  • Let \(I = e^{\lambda t}\), then \(\lambda = \frac{-R}{2L} \pm \sqrt{(\frac{R}{2L})^2 - \frac{1}{LC}}\)
  • Resonant frequency: \(\omega_0^2 = \frac{1}{LC}\)
  • Solving parallel RLC circuit by KCL: the same resonant frequency: \(\omega_0^2 = \frac{1}{LC}\)

Damping

  • Overdamping: \((\frac{R}{2L})^2 - \frac{1}{LC} > 0\)
  • Critical damping: \((\frac{R}{2L})^2 - \frac{1}{LC} = 0\) (decaying faster than overdamping)
  • Underdamping: \((\frac{R}{2L})^2 - \frac{1}{LC} < 0\), oscillation (+)
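A sketch classifying the damping regime from the sign of the discriminant above (component values are hypothetical):

```python
# Damping classification for a series RLC circuit from the sign of
# (R/2L)^2 - 1/(LC). Component values below are hypothetical.
def damping(R, L, C):
    disc = (R / (2 * L))**2 - 1 / (L * C)
    if disc > 0:
        return "overdamped"
    if disc == 0:
        return "critically damped"
    return "underdamped"

kind = damping(2.0, 1.0, 1.0)   # (1)^2 - 1 = 0 -> critical
```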

Quality factor

Bandwidth

Wikipedia

  • At resonance: \(Z_{C}\) and \(Z_{L}\) cancel each other out
  • \(Q = f_{r} / BW\)
  • BW: Bandwidth
  • Series RLC: \(Q = \frac{1}{R}\sqrt{\frac{L}{C}}\)
  • Parallel RLC: \(Q = R\sqrt{\frac{C}{L}}\)

Steady state power analysis

  • Note and specify the difference between peak values (\(I_{M}\), \(V_{M}\)) and the effective (RMS) values.
  • \(p = \frac{V_M I_M}{2}(\cos(\theta_v - \theta_i) + \cos(2\omega t + \theta_v + \theta_i))\)
  • Twice the frequency: \(2\omega t\) compared to current and voltage
  • Average power: \(V_M I_M \cos(\theta_v - \theta_i)/2 = V_{rms} I_{rms} \cos(\theta_v - \theta_i) = P_{app} \cdot pf\)
    • Apparent power: \(P_{app} = V_M I_M / 2 = V_{rms} I_{rms}\)
    • Power factor: \(pf = cos(\theta_v - \theta_i)\). The phase difference between voltage and current
    • Purely resistive: \(p = V_M I_M / 2 = V_{rms} I_{rms}\). pf = 1
    • Purely capacitive / inductive: \(p = pf = 0\). Does not absorb power on average.
    • For average power, one could calculate the resistive part only.

Maximum power transfer

When \(Z_{L} = Z_{TH}^*\) , Im(ΣZ)= 0,
\(P_{L,max} = 0.5 * \frac{\lvert V_{OC} \rvert\ ^2 }{4 R_{TH}}\) (Since \(P = 0.5 V_{M}I_{M}\))

Power factor and complex power

  • \(p = V_M I_M \cos(\theta_v - \theta_i)/2 = P_{app} \cdot pf\). Unit of apparent power: VA; of average power: W
  • Phase difference = 0 (purely resistive), pf = 1
  • Phase difference = -90 (purely capacitive) or 90 (purely inductive), pf = 0
Active power vs reactive power

\(S = P_{app} \cos(\theta_v - \theta_i) + jP_{app} \sin(\theta_v - \theta_i)\)
  • Former: active power (P); latter: reactive power (Q)
  • \(\lvert S \rvert = \sqrt{P^2 + Q^2} = P_{app} = V_{rms} I_{rms}\)
  • \(P = \lvert S \rvert \cdot pf\)
  • For capacitive circuits: Q < 0; inductive circuits: Q > 0
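With RMS phasors, complex power is \(S = V I^*\). A sketch with hypothetical values (120 V rms, 10 A rms lagging by 30°, i.e. an inductive load):

```python
import cmath
import math

# Complex power from RMS phasors: S = V * conj(I).
V = cmath.rect(120.0, 0.0)                  # 120 V rms at 0 deg
I = cmath.rect(10.0, math.radians(-30.0))   # 10 A rms, lagging (inductive)

S = V * I.conjugate()   # complex power (VA)
P = S.real              # active power (W)
Q = S.imag              # reactive power (var); Q > 0 for an inductive load
pf = P / abs(S)         # power factor = cos(theta_v - theta_i)
```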

Safety considerations

  • 100 mA through the heart: ventricular fibrillation, which can be fatal
  • Grounding: increases safety by shunting fault current away from the user
  • Ground fault interrupter (GFI):
  • No fault: in-current = out-current, do nothing
  • Fault: if in-current ≠ out-current, the imbalance induces a current in the sensing coil and breaks the circuit.
  • Accidental grounding: new path for currents, new hazard.

Magnetically coupled circuits

Mutual inductance

  • Open circuit \(v_2 = L_{21}\frac{di_1}{dt}\)
  • Two current sources: self inductance plus mutual inductance
  • \(v_1 = L_1\frac{di_1}{dt} + L_{12}\frac{di_2}{dt}\)
  • \(v_2 = L_{21}\frac{di_1}{dt} + L_{2}\frac{di_2}{dt}\)
  • Beware the dot convention (input and output current directions): convert to the standard circuit
  • The linear model states \(L_{21} = L_{12} = M\)
  • Mutual inductance in series inductors: \(L_{eq} = L_1 + L_2 \pm 2M\)
  • Mutual inductance in parallel inductors: \(L_{eq} = \frac{L_1L_2 - M^2}{L_1 + L_2 \mp 2M}\)

Energy analysis

  • \(w = 0.5L_1I_1^2 + 0.5L_2I_2^2 \pm MI_1I_2\)
  • \(M \le \sqrt{L_1L_2}\), the geometric mean of L1 and L2
  • \(k = \frac{M}{\sqrt{L_1L_2}}\), coupling coefficient (0 to 1)

Transformers

  • Iron core, air core, composite core
  • Ideal transformers: no energy loss (Pin = Pout)
  • \(\frac{V_1}{V_2} = \frac{N_1}{N_2}\)
  • \(\frac{i_1}{i_2} = \frac{N_2}{N_1}\)
  • \(\frac{Z_1}{Z_2} = (\frac{N_1}{N_2})^2\)

  • Analysis of simple transformer circuits (PhD qualification exam)

  • Application: AC -> transformer -> rectifier -> filter -> regulator -> DC
  • Practical transformers
  • Leakage of magnetic flux
  • Winding resistance: copper loss
  • Core loss: eddy current, hysteresis
  • Efficiency: \(\eta = \frac{P_{out}}{P_{in}}\)

Frequency response

  • Resistive circuit: freq-independent \(|Z_R|\) = const, θ = 0
  • Inductive \(|Z_L|\varpropto f\), θ = 90
  • Capacitive \(|Z_C|\varpropto 1/f\), θ = -90

Series RLC

\(Z_{eq} = R + j\omega L + \frac{1}{j\omega C}\)

\(|Z_{eq}| = \frac{\sqrt{(\omega RC)^2 + (1-\omega^2LC)^2}}{\omega C}\)

  • Minimal \(|Z_{eq}|\) when \(\omega = \omega_0 = \frac{1}{\sqrt{LC}}\) (resonant frequency) and \(Im(Z_{eq}) = 0\)
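A numeric check of the resonance condition, with hypothetical component values:

```python
import numpy as np

# Series RLC impedance; at w0 = 1/sqrt(LC) the reactances cancel
# and Z reduces to R. Component values are hypothetical.
R, L, C = 10.0, 1e-3, 1e-6
w0 = 1 / np.sqrt(L * C)

def Z(w):
    return R + 1j * w * L + 1 / (1j * w * C)

Z0 = Z(w0)   # expect Z0 ~ R + 0j
```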

Bode plot

Wikipedia

  • x-axis: freq (log(f) )
  • y-axis: magnitude (20*log(M), in dB) / phase (in degrees)
  • dB is for power amplification / attenuation
  • dBm = \(10log\frac{p}{1mW}\)

Multistage system

  • Amplitude: product of all systems
  • dB: sum of all dB gains

Network transfer function

\[ H(s) = \frac{X_{out}(s)}{X_{in}(s)} \]

Thevenin equivalence theorem for finding the gain.

Bandwidth

Dependent on reactive elements (usually RC circuits, inductors are more difficult to handle)

Cutoff frequency: -3dB (0.707x) voltage magnitude (half power)

Quality factor and effective bandwidth

Series RLC: \(Q = \frac{\omega_0 L}{R} = \frac{1}{R}\sqrt{\frac{L}{C}}\)

Bandwidth (BW) = \(\omega_0\) / Q = \(\omega_{hi} - \omega_{lo}\)

\(\omega_{hi} \omega_{lo} = \omega_0^2\)

Poles and Zeros

Let \(s = j\omega\) (Laplace transform)

For series RLC:
\(Z_{eq} = R + sL + \frac{1}{sC} = \frac{s^2LC + sRC + 1}{sC}\)

\(H(s) = K_0 \frac{(s-z_1)(s-z_2)...}{(s-p_1)(s-p_2)...}\)

  • \(K_0\) : DC term
  • zeros: H(s) = 0
  • poles: H(s) diverges
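For the series RLC impedance above, the zeros come from the numerator polynomial and there is a pole at s = 0 from the denominator. A sketch with hypothetical values chosen for critical damping:

```python
import numpy as np

# Zeros of Z(s) = (s^2 LC + s RC + 1) / (sC): roots of the numerator.
# R = 2, L = C = 1 gives (s + 1)^2, i.e. a repeated zero at s = -1.
R, L, C = 2.0, 1.0, 1.0
zeros = np.roots([L * C, R * C, 1.0])
```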

Filters

  • Low-pass <-> high pass by an RC circuit
  • Band-pass <-> band reject (notch filter) by an RLC circuit or a combination of a low-pass and a high-pass filter

OP-Amp

OP-Amp on Wikipedia

Ideal OP-Amp

Circuit analysis in Ideal OP-Amp

  • \(V_{-} \approx V_{+}\)
  • \(i_{-} \approx i_{+} \approx 0\) (no input current)
  • Make sure \(V_{out}\) is in the range of supplied voltages.

The rest is Ohm's law and circuit analysis.

More OP-Amp circuits

Multiple input voltages

Principle of superposition. One voltage source at a time.

With energy-storing devices
  • Differentiator
  • Integrator
  • Antoniou Inductance Simulation Circuit

\(L = C_4 R_1 R_3 R_5 / R_2\)

Semiconductors

Materials

  • Group IV: Si, Ge
  • Group III + V: GaN, GaAs(P)

Why semi-conductivity

  • Band gap energy difference \(E_{g}\) = \(E_{c}\) - \(E_{v}\)
  • Insulators: > 5 eV
  • Semiconductors: smaller gap, a small amount of electrons escape from valence band to the conduction band
  • Conductors (metal, graphite): overlap (no gap)
  • Direct (III+V) vs indirect (IV) band gaps
  • Direct: could emit photons (LED, photo detector)
  • Indirect: emit a phonon in the crystal
  • The tetrahedral covalent bond crystalline structure for extrinsic (doped) semiconductors

Carriers

  • Electrons (e-) in the conduction band as well as the vacancies in the valence band (holes, h+)
  • For intrinsic semiconductors, carrier mobility \(\mu\): GaN (GaAs) > Ge > Si, correlating with conductivity
  • Carrier concentration enriched by doping (increasing conductivity): making extrinsic semiconductors
  • Doping group V elements (donor impurities): electrons are the major carriers (N-type)
  • Doping group III elements (acceptor impurities): holes are the major carriers (P-type)
  • N-type semiconductors have higher \(\mu\) than P-type since electrons have a lower effective mass than holes. Thus, N-type is better for high-freq applications. But P-type has a dual role, acting as both resistors and semiconductor switches.

P-N junctions and diodes

P-N junction

Depletion zone

  • Diffusion of major carriers generate an electric field across the boundary
  • A zone with little carriers, high resistance
  • Possesses a barrier voltage

Applying voltage

  • Forward bias: shrinking depletion zone, high conductivity
  • Reverse bias: widening depletion zone, very low conductivity (essentially open circuit until breakdown)

Shockley equation

\(I_{D} = I_{S} (exp(V/V_{T}) - 1)\),

where \(V_{T} = \frac{kT}{q} = \frac{RT}{F} = 26mV\) (Thermal voltage)
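A sketch evaluating the Shockley equation; the saturation current \(I_S\) is a hypothetical 1 pA:

```python
import math

# Shockley diode equation: I_D = I_S * (exp(V / V_T) - 1).
I_S = 1e-12    # hypothetical saturation current (A)
V_T = 0.026    # thermal voltage at room temperature (V)

def diode_current(V):
    return I_S * (math.exp(V / V_T) - 1)

fwd = diode_current(0.6)    # forward bias: milliamp-scale current
rev = diode_current(-1.0)   # reverse bias: ~ -I_S (essentially open)
```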


Three representations of diodes

Assuming an internal resistance (\(R_D\)) and a threshold voltage (\(V_D\)):
  • For \(V \le V_D\): open circuit.
  • For \(V \geq V_D\): equivalent to an opposing voltage source of \(V_D\) (in series with \(R_D\)).

  1. Ideal diodes: \(R_D = 0\), \(V_D = 0\). Forward bias: short circuit. Reverse bias: open circuit.
  2. With barrier voltage (Si = 0.6~0.7 V; Ge = 0.2~0.3V): \(R_D = 0\), \(V_D \neq 0\).
  3. Practical diodes: \(R_D \neq 0\), \(V_D \neq 0\)

Diode circuits

Transform diodes into equivalent components.

Source

Rectifiers

Only half wave rectification was covered.

Limiters (cutters)

Filters

When the load resistance is infinite (open circuit): peak detector

When the load resistance is finite:
The larger the discharge time constant ( \(\tau = RC\) ), the smaller the ripple voltage. ( \(V_r \approx \frac{V_p}{fCR}\) when \(V_r \ll V_p\) )

Voltage regulator using Zener diodes

Zener diode on Wikipedia

  • First unplug the Zener diode and solve the voltage across it.
  • Normally operates in reverse bias. ( \(V_{Z}\) = 4-6 V )
  • When applied voltage > \(V_{Z}\): Acts as a voltage source of \(V_{Z}\). Open circuit otherwise.
  • When in forward bias: similar to regular diodes ( \(V_{Z}\) = 0.7 V )
  • At breakdown, \(V_{Z}\) is independent of the load resistance.

BJT

Bipolar junction transistor on Wikipedia

Current control devices.

Symbol

NPN BJT (more common)

PNP BJT (less common nowadays)

  • B: Base
  • C: Collector
  • E: Emitter

Math

  • \(I_C = \beta I_B\), \(\beta \gg 1\) (typically 80-180)
  • \(I_E = I_C + I_B = (1 + \beta) I_B\)
  • Barrier voltage: \(V_{BE} \approx 0.7\) V for Si BJTs; 1.1 V for GaN BJTs.
  • \(I_{Csat} \approx \frac{V_{CC}-0.2}{R_C + R_E}\)
  • \(\beta I_B = I_C \leq I_{Csat}\)

The rest is regular circuit analysis (KCL, KVL).

One could use the fact that \(I_B\) is very small (\(\mu A\)) compared to other currents (\(mA\)).
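The bias-point arithmetic above can be sketched with hypothetical component values for a common-emitter stage:

```python
# BJT collector current: I_C = beta * I_B, limited by saturation.
# All values here are hypothetical.
beta = 100.0                 # current gain
V_CC, R_C, R_E = 12.0, 1e3, 200.0
I_B = 20e-6                  # 20 uA base current

I_C_sat = (V_CC - 0.2) / (R_C + R_E)   # saturation limit
I_C = min(beta * I_B, I_C_sat)         # 2 mA here: active region
```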

MOSFET

MOSFET on wikipedia

Voltage control devices.

  • Gate voltage \(V_{GS}\) is greater than threshold (\(V_t\)): low resistance, (ideally) short circuit.
  • Otherwise, high resistance, (ideally) open circuit.

Computational Cognitive Neuroscience

Course Notes of Computational Cognitive Neuroscience by Prof. 鄭士康.

Course Information

Central Questions

  • Could machine perceive and think like humans?
  • Turing test
  • Stimuli -> acquire -> store -> transform (process, emotion) -> recall -> response (actions)

Cognitive Psychology

  • Assumption: materialism: mind = brain function
  • Later became Cognitive Neuroscience
  • Models: Box and arrow -> Computational (mechanistic) vs Statistical model
  • Neuronal network connections

Artificial intelligence

  • Reductionism
  • Search space of parameters
  • General problem solver
  • Expert systems (symbol and rule-based)
  • Symbol processing ≢ intelligence (Chinese room argument)
  • Does the machine really know semantics from the symbols and rules?
  • Mimicking biological neural networks (H&H neuron model) -> spiking neuron network & Hebbian learning
  • Perceptron: limitations shown by Minsky (unable to solve the XOR problem) -> 1st AI winter
  • Multilayer and backpropagation: connectionism
  • Parallel distributed processing (1986): actually neural networks (a taboo by then)
  • Convolutional neural networks (CNNs)
  • Computer vision
  • Similar to image processing in the visual cortex
  • Decomposition of features: stripes, angles, colors, etc.
  • Does intelligence emerge from complex networks?
  • Dynamicism
  • embodied approach
  • Feedback system
  • Systems of non-linear DEs
  • Cybernetics: control system for ML (system identification)
  • Bayesian approach : pure statistics regardless of underlying mechanism

Biological plausibility

  • Low = little similarity to biological counterpart
  • e.g. expert systems
  • CNN: medium BP
  • SpiNNaker and Nengo: high BP

Levels (scales) of nervous system

  • Focused on mesoscopic scale (neurons and synapses) in this course

Building a brain with math models

Why?

Feynman: What I cannot create, I do not understand.

  1. Understanding brain functions -> health (AD, PD, HD)
  2. AI modeling and applications

3D brain structure

www.g2conline.org

The scale of brain models

  • Neuron
  • Small clusters of neurons
  • Large scale connections (connectomes)

Neuron biology

  • dendrite
  • soma
  • axon and myelin sheath

Hodgkin and Huxley model (1952)

  • Math model from recordings of squid giant axon
  • Action potential
  • Biophysically accurate, but harder to do numerical analysis
  • Chance and Design by Alan Hodgkin
Derived models
  • Simpler models with action potentials and multiple inputs
  • Leaky Integrate-and-Fire (LIF) model
  • Leabra: single equation for a neuron, no spatial components
  • Compartment model of dendrite, soma, and axon.
  • Delay effect (+)
  • Discretization of the partial differential equation (PDE) model
  • Could delay differential equations (DDEs) be used in this context?
  • Data-rich (from fMRI, DTI, ...) but theory-poor
  • Large-scale models (connectomes)
  • Neuromorphic hardware

NEF (Neural Engineering Framework) & SPA (Semantic Pointer Architecture)

Semantic Pointer
  • Semantics important for both symbolic and NN models
  • Example : autoencoder
  • Dimension reduction layer by layer (raw data -> symbols)
  • Similar to visual cortex and associative areas
  • Reverse the network to adjust the weights
  • Loss = predicted - input

  • Spaun model: Autoencoders to process multiple sensory inputs as well as motor functions and decision making (transformation, working memory, reward, selection).

  • Ewert's Question: How is neural activity coordinated, learned and controlled?

  • Capturing semantics
  • Encoding syntactic structures
  • Controlling information flow?
  • Memory, learning?
Embodied semantics
  • Neural firing patterns
  • High dimensional vector symbolic architectures
Working memory
  • 7 +/- 2 items, with highest recall for the 1st and the last item
Spike-Timing-Dependent plasticity (STDP)
  • non-linear mapping for learning through synapses

Spiking models

  • Keywords: spike firing rate, tuning curves, Poisson models
  • Adrian's frog leg test: loading induced spikes in the sciatic nerve
  • Stereotyped signals = spikes
  • Firing rate is a function of the stimulus
  • Fatigue (adaptation) over time
Neural responses
  • Raster plot: dot = one spike. x: time; y: neuron id
  • Firing rate histogram: x: time; y: # of spikes
  • Neural signal response: with Dirac delta function (signal processing?)

$$
\rho(t) = \sum_{i=1}^{N}\delta(t - t_i)
$$

  • Individual spikes -> firing rates (in Hz) with a window (moving average)

  • Similar to pulse density modulation (PDM)

Tuning curve
  • x: stimuli trait; y: response
  • e.g. visual cortical neuron response to line orientation
  • Present in both sensory and motor cortices
Poisson process for spike firing
  • Poisson process: a random process with constant rate (or average waiting time).
  • The probability P of n events fired in a period T given a firing rate r can be expressed as:
\[ P_T[n] = \frac{(rT)^n}{n!}e^{-rT} \]
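The formula can be evaluated directly; the rate and window below are hypothetical (10 Hz over 100 ms, so the mean count rT = 1):

```python
import math

# Poisson probability of n spikes in a window T at firing rate r.
def p_spikes(n, r, T):
    return (r * T)**n / math.factorial(n) * math.exp(-r * T)

r, T = 10.0, 0.1
p0 = p_spikes(0, r, T)                             # exp(-1): no spikes
total = sum(p_spikes(n, r, T) for n in range(50))  # should sum to ~1
```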
Rate code v.s. temporal code
  • Dense firing for the former, sparse firing for the latter
  • Population code (a group of neurons firing)

Encoding / decoding

  • encoding: stimuli \(x(t)\) -> spikes \(\delta (t-t_i)\)
  • decoding: spikes \(\delta (t-t_i)\) -> interpretation of stimuli \(\hat x(t)\)

Neural Physiology

  • Neuron: dendrites, soma, axon
  • Synapses: neurotransmitter / electrical conduction
  • AP from axon => Graded potential in dendrite / soma
  • Temporal / spatial summation of graded potentials: AP at the axon hillock

Excitable membrane

  • Phospholipid bilayer (plasma membrane) as barrier
  • Integral / peripheral proteins: ion carriers and channels
  • Selected permeability to ions: Na / K gradients

Action potential

  • Voltage-gated Na channel: both positive and negative feedback (fast)
  • Voltage-gated K channel: negative feedback (slow)
  • Leaky chloride channel: helping maintaining resting potential (constant)
  • Refractory period (5 ms): fraction of available Na channels is too low for an AP
  • Nodes of Ranvier and myelin sheath: accelerates AP conduction

Neurotransmitters

  • Signaling molecules in the synaptic cleft
  • AP -> Ca influx -> vesicle release -> receptor binding -> graded potentials (EPSP/IPSP) -> recycle / degradation of neurotransmitters

Neural models

  • Features to reproduce: Integrating input, AP spikes, refractory period

Electrical activity of neurons

  • Nernst equation for one species of ion across a semipermeable membrane
  • GHK voltage equation for multiple ions
  • Quasi-ohmic assumption for ion channels \(I_x = g_x (V_m-E_x)\)
  • Membrane as capacitor (1 \(\mu F/ cm^2\))
  • Equivalent circuit: An RC circuit
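A sketch of the Nernst equation for a single ion species; the K+ concentrations are typical textbook values, assumed here for illustration:

```python
import math

# Nernst potential: E_x = (RT / zF) * ln([X]_out / [X]_in).
R = 8.314      # gas constant (J / mol K)
T = 310.0      # body temperature (K)
F = 96485.0    # Faraday constant (C / mol)
z = 1          # valence of K+

K_out, K_in = 5.0, 140.0   # assumed K+ concentrations (mM)
E_K = (R * T) / (z * F) * math.log(K_out / K_in)   # roughly -0.089 V
```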

HH model

  • GHK voltage equation not applicable (not in steady state)
  • Using Kirchhoff's current law to get voltage change over time
  • Parameters from experiments on the squid giant axon
  • K channel: gating variable n
\[ \begin{aligned} g_K &= \bar g_Kn^4 \cr \frac{dn}{dt} &= \alpha - n (\alpha + \beta) \end{aligned} \]
α and β are determined by voltage (membrane potential)
  • Na channel: two gating variables, m and h

$$
\begin{aligned}
g_{Na} &= \bar g_{Na} m^3h \cr
\frac{dm}{dt} &= \alpha_m - m (\alpha_m + \beta_m) \cr
\frac{dh}{dt} &= \alpha_h - h (\alpha_h + \beta_h) \cr
\end{aligned}
$$

αs and βs are determined by voltage (membrane potential)

Considerations

  • Model fidelity (biological relevance) vs simplicity (ease to simulate and analyze)
  • Biological plausibility

Dynamic system theory

A system of ODEs

e.g. the butterfly effect (chaotic systems): small deviations in initial conditions lead to hugely different results

Morris-Lecar neuron model

  • Similar to the HH model (KCL)
  • Ca, K, and Cl ions
  • two state variables: voltage (V) and one variable (w) for K
  • using tanh and cosh functions
Phase plane analysis
  • Stability: eigenvalues of the Jacobian of the right-hand side at the steady state
  • External current (Ie) = 0: single stable steady-state (intersection of V and w nullclines)
  • Increasing Ie: shifting the V nullcline => unstable steady-state (limit cycle)
  • Bifurcation: V vs Ie

Integrate and fire (IF) model

  • A simple RC circuit
  • Single state variable (V)
  • Use of conditional statements to control spiking firing and refractory period
  • Used in nengo (plus leaky = LIF model)
  • Firing rate adaption: IF model + more terms
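A minimal LIF sketch with assumed (dimensionless) parameters, showing the conditional reset that produces spiking:

```python
# Leaky integrate-and-fire neuron: leaky RC integration plus a
# threshold/reset rule. All parameters here are assumed for illustration.
dt, tau = 1e-4, 0.02        # time step (s), membrane time constant (s)
V_th, V_reset = 1.0, 0.0    # spike threshold and reset value
I_in = 1.5                  # constant suprathreshold input (arbitrary units)

V, spikes = 0.0, 0
for _ in range(10_000):     # simulate 1 s
    V += dt / tau * (I_in - V)   # leak toward I_in
    if V >= V_th:                # spike and reset
        V = V_reset
        spikes += 1
```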

Izhikevich model

  • Two state variables
  • Realistic spike patterns by adjusting parameters
  • Could be used in large systems (100T synapses)

Compartment model

  • Spatial discretization for neuron models
  • Coupled RC circuits -> FEM grids

Filters

  • Presynaptic AP -> synapse neurotransmitter release -> Postsynaptic potentials
  • Approximated by an LTI(linear, time invariant) system
  • Linear: superposition
  • Time invariant: unchanged with time shifting
  • Impulse response: given a impulse (delta function) -> h(t), transformed results
  • Convolution: h(t) instead of the system itself
  • Fourier transform: Convolution -> multiplication

Synapse model

  • Synapse = RC low pass filters with time scale = \(\tau\)
  • \(\tau\) is dependent on types of neurotransmitter and receptors

Intro to brain

Prerequisite

  • Simple linear algebra (vector and matrix operations)
  • Graph theory: connections

Reverse engineering the brain

  • engineeringchallenges.org
  • Complexity, scale, connection, plasticity, low-power
  • Design: brain scheme; designer: natural selection

Why a brain

  • To survive and thrive.
  • Brainless (single-celled organisms): simple perceptions and reactions. Some endogenous activity
  • Simple brain (C. elegans): aversive response and body movement
  • Connectome routing study (as in EDA) showed 90% of the neurons are in the optimal positions
  • General scheme: sensory -> CNS -> motor (with endogenous states (thoughts) in the CNS)

Design constraints

  • Information theory (information efficiency)
  • Energy efficiency
  • Space efficiency
  • Human brain is already relatively larger than almost all animals

Evolution of the brain in Cordates

  • Dorsal neural tube -> differentiation respecting sensory, motor, and inter connections

Central pattern generator

  • The brainless walking cat: endogenous activity in the spinal cord
  • Main functional unit in the CNS

nengo programming

Classes

  • Network: model itself
  • Node: input signal
  • Ensemble: neurons
  • Connection: synapses
  • Probe: output
  • Simulator: simulator (literally)

Integrator implementation

  • Similar to the Euler method in numerical integration
\[ y[n] = A \{ y[n-1] + \Delta t x[n-1] \} \]
import matplotlib.pyplot as plt
import nengo
from nengo.processes import Piecewise

# The model
model = nengo.Network(label='Integrator')

with model:
    # Neurons representing one number
    A = nengo.Ensemble(100, dimensions=1)

    # Input signal
    src = nengo.Node(Piecewise({0: 0, 0.2: 1, 1: 0, 2: -2, 3: 0, 4: 1,5: 0}))

    tau = 0.1

    # Connect the population to itself
    # transform: transformation matrix
    # synapse: time scale of low pass filter
    nengo.Connection(A, A, transform=[ [1] ], synapse=tau)
    nengo.Connection(src, A, transform=[ [tau] ], synapse=tau)
    input_probe = nengo.Probe(src)
    A_probe = nengo.Probe(A, synapse=0.01)

# Create our simulator
with nengo.Simulator(model) as sim:
    # Run it for 6 seconds
    sim.run(6)
# Plot the decoded output of the ensemble
plt.figure()
plt.plot(sim.trange(), sim.data[input_probe], label="Input")
plt.plot(sim.trange(), sim.data[A_probe], 'k', label="Integrator output")
plt.legend();
plt.show()

Oscillator implementation

Harmonic oscillator: one 2nd order ODE -> two 1st order ODEs

\[ \begin{aligned} \frac{d^2x}{dt^2} &= -\omega^2 x \cr \vec{x} &= \begin{bmatrix}x \cr \frac{dx}{dt} \end{bmatrix} \cr \frac{d\vec{x}}{dt} &= \begin{bmatrix}0 & 1 \cr -\omega^2 & 0 \end{bmatrix} \vec{x} = A \vec{x} \end{aligned} \]

nengo:

\[ \begin{aligned} \vec{x} &= \begin{bmatrix}x_0 \cr x_1 \end{bmatrix} \cr \vec{x}[n] &= \begin{bmatrix}1 & \Delta t \cr -\omega^2\Delta t & 1 \end{bmatrix} \vec{x}[n-1] = B \vec{x}[n-1] \end{aligned} \]
import matplotlib.pyplot as plt
import nengo
from nengo.processes import Piecewise

# Create the model object
model = nengo.Network(label='Oscillator')

with model:
    # Neurons representing 2 numbers (dim = 2)
    neurons = nengo.Ensemble(200, dimensions=2)
    # Input signal
    src = nengo.Node(Piecewise({0: [1, 0], 0.1: [0, 0]}))
    nengo.Connection(src, neurons)
    # Create the feedback connection. Note the transformation matrix
    nengo.Connection(neurons, neurons, transform=[ [1, 1], [-1, 1] ], synapse=0.1)

    input_probe = nengo.Probe(src, 'output')
    neuron_probe = nengo.Probe(neurons, 'decoded_output', synapse=0.1)

# Create the simulator
with nengo.Simulator(model) as sim:
    # Run it for 5 seconds
    sim.run(5)

plt.figure()
plt.plot(sim.trange(), sim.data[neuron_probe])
plt.xlabel('Time (s)', fontsize='large')
plt.legend(['$x_0$', '$x_1$'])

data = sim.data[neuron_probe]
plt.figure()
plt.plot(data[:, 0], data[:, 1], label='Decoded Output')
plt.xlabel('$x_0$', fontsize=20)
plt.ylabel('$x_1$', fontsize=20)
plt.legend()
plt.show()

Connectivity analysis

  • Structural: anatomical structures e.g. water diffusion via DTI
  • Functional: statistic, dynamic weights
  • Effective: causal interactions (presynaptic spikes -> postsynaptic firing)
  • ref. The Book of Why (因果革命)

Microscale vs Macroscale

  • Microscale: um ~ nm (synapses)
  • Macroscale: mm (voxels) coherent regions

Graph theory

  • Node: brain areas (or neurons)
  • Edges: connections (or synapses)
  • Represented by adjacency matrices (values = connection weights)

Types of networks

  • Nodes in a circle; Connections in an adjacency matrix
  • Measure: degrees of a node (inward / outward) / neighborhood (Modularity Q, Small-worldness S)
Random

Same edge probability

Scale-free
  • Power law
  • Fractal
  • Increased robustness to neural damage
Regular
  • Local connections only
Modular
  • hierarchical clusters
  • Built by attraction and repulsion between nodes
  • In some biological neural networks
Small world
  • Similar to social networks, sparse global connections
  • A few hubs (opinion leaders) with high degrees (connecting edges)
  • Rich hub organization in biological neural networks (10 times the connections to the average)
  • Anatomical basis (maximize space / energy efficiency)

Neural Engineering Framework (NEF)

  • By Eliasmith
  • Intended for constant structures without synaptic plasticity
  • Compared to SNNs (with learning = synaptic plasticity)
  • Neural compiler (high level function <=> low level spikes)

Central problems

  • Stimuli detection (sensors)
  • Representation / manipulation of information (sensory n.)
  • As spikes (pulse density modulation = PDM)
  • Recall / transform (CNS)

Heterogeneity in realistic neural networks

  • Different set of parameters for each neuron in response to stimuli
  • Represented as tuning curves

Building NEF models with nengo

  • Hypothesis / data / structure from the real counterpart
  • Build NEF and check behavior
  • Rinse and repeat

Central NEF principles

Representation
  • Action potential: digital, non-linear encoding (axon hillock)
  • Graded potential: analog, linear decoding (dendrite)
  • Compared to ANNs:
  • dendrite = weighted sum from other neurons
  • axon hillock: non-linear activation function (real number output)
  • Examples: Physical values: heat, light, velocity, position
  • mimicking sensory neurons = transducer producing pulse signals
Transformation of encoding information by neuron clusters
Neural dynamics for an ensemble of neurons

HH model, LIF, control theory

PS
  • Neurons are noisy
  • In the NEF: the basic unit is an ensemble of neurons
  • Post synaptic current: approximated by one time constant

Neural representation

Encoding / decoding

  • Ensemble = Digital-analog converter like digital audio processing

Symbols used when neural coding

  • x: strength of external stimuli
  • J(x): x-induced current
  • \(a(x) = G[J(x)]\): firing rate of spikes ≈ activation function in ANNs
  • Most important parameters
  • \(J_{th}\) (threshold current)
  • \(\tau_{ref}\) (refractory period → maximal spiking rate)

Population encoding

A group of neurons collectively determines the value by their spikes,
in contrast to sparse coding.

Some linear algebra
  • Any vector can be decomposed into a unique linear combination of basis vectors
  • The most convenient ones are orthogonal bases e.g. sin / cos in Fourier series
  • The stimulus represented by the ensemble can be estimated from a weighted linear combination of neurons with different tuning curves
  • Simplest : two neuron model (on and off)
  • Adding more and more neurons differing in tuning curves (more bases) = more accurate representation
Optimal ensemble linear encoder
  • Calculated by solving a linear system
  • Nengo derives the best set of weights for an ensemble of neurons automatically
  • Adding Gaussian noise in fact enhances the robustness of the tuning-curve matrix
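The decoder solve can be sketched in NumPy: given a matrix of (noisy) tuning-curve activities, least squares yields one decoding weight per neuron. The rectified-linear tuning curves, gains, and noise level below are made-up stand-ins, not Nengo's actual solver:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)                      # stimulus values to represent

# Toy tuning curves: rectified-linear neurons with random gains and encoders
n_neurons = 30
gains = rng.uniform(5, 20, n_neurons)
encoders = rng.choice([-1, 1], n_neurons)        # "on" and "off" neurons
biases = rng.uniform(-10, 10, n_neurons)
A = np.maximum(0, np.outer(x, encoders * gains) + biases)  # (100, 30) activities

A_noisy = A + rng.normal(0, 0.5, A.shape)        # noise also regularizes the solve
d, *_ = np.linalg.lstsq(A_noisy, x, rcond=None)  # decoders: one weight per neuron

x_hat = A @ d                                    # decoded estimate of the stimulus
print("RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))
```

Adding more neurons (more basis functions) shrinks the reconstruction error, matching the bullet points above.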

Example: horizontal eye position in NEF

  • System description
  • Max firing rate = 300 Hz
  • On-off neurons
  • Goal: linear tuning curve
  • How neurons work in abducent motor neuron: an integrator
  • Populations, noise, and constraints
  • Solution errors associated to the number of neurons
  • Noise error
  • Static error
  • Rounding error

Vector encoding / decoding

  • Similar to the scalar case, but replaced with vectors
  • Automatically handled by the nengo framework

Nengo examples

  • RateEncoding.py
  • ArmMovement2D.py

Neural transformation

  • Linear
  • Non-linear
  • Weighting: positive (excitatory) / negative (inhibitory)

Multiplication

  • Controlled integrator (memory)
  • ref: Multiplication.py
  • Traditional ANN counterpart: Neural clusters A and B fully connected to combination layer, respectively
  • Making a subnetwork: factory function

Communication channel

  • Output of one ensemble => Input of another ensemble
  • Traditional ANN counterpart: fully-connected layers
  • \(w_{ji} = \alpha_je_jd_i\)
  • nengo: simply Connection(A, B)

Static gain c (multiplication with a scalar)

  • \(w_{ji} = c\alpha_je_jd_i\)
  • nengo: Connection(A, B, transform=c)
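The factored weight formula can be checked numerically. A sketch with made-up gains \(\alpha_j\), encoders \(e_j\), and decoders \(d_i\), showing that \(w_{ji} = c\alpha_je_jd_i\) is a rank-1 (outer-product) matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 4, 3
c = 2.0                              # static gain
alpha = rng.uniform(1, 5, n_post)    # gains of post-synaptic neurons
e = rng.choice([-1.0, 1.0], n_post)  # encoders (1-D representation)
d = rng.normal(0, 0.1, n_pre)        # decoders of the pre-synaptic ensemble

# w_ji = c * alpha_j * e_j * d_i  =>  rank-1 weight matrix
W = c * np.outer(alpha * e, d)       # shape (n_post, n_pre)

print(W.shape)                       # (3, 4)
print(np.linalg.matrix_rank(W))      # 1
```

In practice Nengo keeps the factored form rather than materializing the full matrix; that factorization is the point of the NEF connection scheme.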

Addition

  • c = a + b
  • nengo: Connection(A, C); Connection(B, C)
  • Adding two vectors: just change dimension

Nonlinear transformation

  • nengo: define a vector transformation function f => Connection(A, B, function=f)

Negative weight

  • An ensemble of inhibitory neurons

Neural dynamics

  • Neural control systems: non-linear, time-variant (modern control theory)

Representation

  • 1st order ODEs
  • State variables as a vector
  • \(\mathbf{x}(t) = \mathbf{x}(t - \Delta t) + f(t - \Delta t, \mathbf{x}(t - \Delta t))\)
  • Example: cellular automata finite state machine (Game of life)

Linear control theory

u: input, y: output, x: internal states
\(\mathbf{\dot{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)\)
\(\mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)\)
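The state equations can be stepped forward with simple Euler integration; a sketch for a 1-D leaky integrator (the values of A, B, C, D and the step input are chosen for illustration):

```python
# x' = A x + B u,  y = C x + D u  (scalar case)
A, B, C, D = -1.0, 1.0, 1.0, 0.0
dt, T = 0.001, 5.0
u = 1.0                             # constant (step) input

x = 0.0
for _ in range(int(T / dt)):
    x = x + dt * (A * x + B * u)    # Euler step
y = C * x + D * u

# For a step input the state settles toward -B/A * u = 1
print(y)
```

With A negative the system is stable (pole in the left half plane), consistent with the stability remarks below.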

Frequency response and stability analysis

  • Laplace transform \(L\{f(t)\} = \int^\infty_0e^{-st}f(t)dt = F(s)\)
  • Impulse response: \(h(t) = \frac{1}{\tau}e^{-t/\tau}, \ H(s) = \frac{1}{1 + s\tau}\). Stable (pole at the left half plane)
  • Convolution in the time domain = multiplication in the Laplace (s-domain)

Neural population model

  • Linear decoder for post-synaptic current (PSC)
  • \(A^\prime = \tau A + I\)
  • \(B^\prime = \tau B\)

Recurrent connections

  • Positive feedback: Feedback1.py
  • Negative feedback: Feedback2.py (without stimuli), Feedback3.py (with stimuli)
  • Dynamics: Dynamics1.py and Dynamics2.py: step stimuli + feedback
  • Integrators: \(A = \frac{-1}{\tau} I\)
  • Oscillators: \(A = \begin{bmatrix} 0 & 1 \\ -\omega^2 & 0 \end{bmatrix}\)
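The mapping from a dynamics matrix A to the recurrent transform \(A^\prime = \tau A + I\) is a one-liner; a sketch for the oscillator (ω = 10 rad/s and τ = 0.1 s are arbitrary choices):

```python
import numpy as np

tau = 0.1                          # synaptic time constant
omega = 10.0                       # natural frequency, rad/s (made up)

A = np.array([[0.0, 1.0],
              [-omega**2, 0.0]])   # oscillator dynamics matrix
A_prime = tau * A + np.eye(2)      # recurrent transform: A' = tau*A + I

print(A_prime)                     # values [[1, 0.1], [-10, 1]]
```

In Nengo this would be passed as `Connection(neurons, neurons, transform=A_prime, synapse=tau)`, as in the feedback examples listed above.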

Equations for different levels

  • Nengo: higher level
  • Implementation: lower rate / spiking levels

Sensation and Perception

Environment (stimulation) (analog signal) -> sensory transduction (feature extraction) -> impulse signal (sensory nerve) -> perceptions (sensory cortex) -> processing (CNS) -> action selection (motor cortex) -> impulse signal (motor nerve) -> actuator (e.g. muscle) -> action

Perception

  • Internal representation of stimuli impulses
  • The experience in the association cortex (not necessarily the same as the outside world)
  • Book: Making Up the Mind

Psychophysics

e.g. Psychoacoustics: used in MP3 compression
* Threshold in quiet / noisy environment
* Equal-loudness contour in different frequencies
* Weber–Fechner law: perceived change scales with relative (percent) change: \(S = k\log\frac{I}{I_0}\)

Vision

  • Convergence of information inside retina
  • 260M photoreceptor cells indirectly connected to 2M ganglion (optic nerve) cells
  • Dimension reduction (pooling / convolution)
  • Need of learning to see (mechanism of amblyopia): Neural wiring in the visual tract and the visual cortex (training of CNNs)

V1: primary visual cortex

  • Detection of oriented edges, grouped by cortical columns with sensitivity to different angles
  • Similar to the tuning curve in NEF

Successively richer layers

Optic nerve -> LGN (thalamus) -> V1 -> V2 / V4 -> dorsal (metric) or ventral (identification) tracks

  • Feature extraction
  • Similar to convolutional neural network (CNNs)
  • Demonstrated in fMRI

Ventral track

  • What is the object?
  • V2 / V4 -> Post. Inf. temporal (PIT) cortex -> Ant. Inf. temporal (AIT) cortex
  • PIT: More complex features e.g. fusiform face area for fast facial recognition
  • AIT: Classification of objects regardless of size, color, viewing angle...
  • Hyperdimensional vector (EECS) = semantic pointer (NEF)
  • Neural ensemble of 20000 in monkeys
  • Thus the functions of the temporal lobe = categorizing the world:
  • Primary and associative auditory
  • Labeling visual objects
  • Language processing for both visual and auditory cues
  • Episodic memory formation by hippocampus

Dorsal track

  • Where is the object?
  • V1 -> V2 -> V5 -> parietal lobe (visual association area)
  • metrical information and mathematics
  • Motion detection and information for further actions

Ambiguous figures / optical illusions

Forms 2 attractors (interpretations)

e.g Necker cube

Feedback

  • External cue and expectation (top down perception)
  • Report to LGN about the error

Object perception

  • In biology: robust recognition despite color, viewing angle differences (object consistency)
  • View-dependent frame of reference vs. View-invariant (grammar pattern) frame of reference

Autoencoders

Ewert's central problems

  • Perception: encoding stimuli from analog to digital spikes
  • Central processing: transformation and recall of information, action selection
  • Action execution: decoding digital spikes to response

Autoencoder in traditional ANNs

  • Compressing the input into a smaller-dimensional representation, then expanding it back to an estimate of the input
  • Hyper dimension vector in CS
  • Semantic pointer in NEF
  • Novelty detection: comparison of the input to the output from trained autoencoder

Basic machine learning

  • For y = f(x), find f
  • Training, testing, validation sets
  • Learning curves: overfitting if overtraining
  • Cross validation to reduce overfitting and increase testing accuracy
  • K-fold cross validation
  • SVM: once worked better than ANNs
  • Converting a low-dimensional but complex decision border into a higher-dimensional, simpler (even linear) one by transforming the data points

Classical cognitive systems (expert system)

  • Symbols and syntax processing (LISP)
  • Failed due to the symbol grounding problem (unable to resolve the meaning of symbols)
  • Another attempt: connectionist (semantic space) => too complex
  • Symbol binding system: 500M neurons to recognize simple sentences (fail)
  • Until the semantic pointer hypothesis: explaining high level cognitive function
  • Halle Berry neurons (grandmother neurons): highly selective to one category instances (sparse coding)
  • However most instances are population coding

Semantic pointer and SPA

  • Equals to hyperdimensional vector in the mathematical sense
  • Presented by an ensemble of neurons in biology
  • The semantic space (hyperdimensional space) holds information features
  • Needs enough dimensions for the overwhelming number of concepts in the world
  • Pointers = symbols = general concepts
  • Indirect addressing of complex information
  • Shallow and deep manipulation (dual coding theory)
  • Efficient transformation (call by address)
  • Shallow semantics (e.g. text mining): symbols and stats only, does not encode the meaning of words
  • Nengo: nengo-spa

Encoding information in the semantic pointer

Circular convolution for syntax processing
* Readily extract the information in SP after filtered some noise
* Does not incur extra dimensions
* Works on real numbers (XOR works on binary values only)
* Solves Jackendoff's challenges
* Binding problem : red + square vs green + circle
* Problem of 2: small star vs big star
* Problem of variable: blue fly (n.) vs. blue fly(v.): binding restrictions
* Binding in working memory vs long-term memory
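Circular convolution binding can be sketched with FFTs in NumPy: bind two random semantic pointers, then unbind with the approximate inverse (involution). The vectors here are random stand-ins for concepts like "red" and "square":

```python
import numpy as np

def bind(a, b):
    """Circular convolution a ⊛ b via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    """Approximate inverse (involution): reverse all but the first element."""
    return np.concatenate([a[:1], a[:0:-1]])

rng = np.random.default_rng(42)
D = 512
red = rng.normal(0, 1 / np.sqrt(D), D)     # roughly unit-length random vectors
square = rng.normal(0, 1 / np.sqrt(D), D)

trace = bind(red, square)                  # "red square"
recovered = bind(trace, inverse(red))      # ≈ square, plus noise

sim = np.dot(recovered, square) / (np.linalg.norm(recovered) * np.linalg.norm(square))
print("cosine similarity:", sim)           # well above chance for large D
```

Note the bound trace has the same dimensionality D as its inputs ("does not incur extra dimensions"), and unbinding recovers a noisy copy that a clean-up memory can then snap to the stored pointer.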

One could combine multiple sources of input (word, visual, smell, auditory)

Action control

Behavioral pattern / coordination

Affordance competition hypothesis

  • Affordance part: continuously updating the status
  • Competition part: select best action by utility (spiking activity)
    In biology:
  • Premotor / supplementary motor cortex
  • Weighted summation of previously learned motor components (basis functions) -> desired movement
  • Primary motor cortex
  • Basal ganglia
  • Caudate, putamen, globus pallidus, SN
  • Excitation and inhibitory projections
  • Dopaminergic neurons: reward expectation: reinforcement learning
  • Movement initiation
  • Direct, indirect, and hyperdirect pathways
  • Cerebellum
  • Learning and control of movements
  • Error-driven (similar to back propagation): supervised learning
  • Hippocampus: self-organizing (Hebbian, STDP): unsupervised learning

Neural optimal control hierarchy (NOCH)

Computational model by students of Eliasmith, including:
* Cortex (premotor)
* cerebellum
* basal ganglia
* motor cortex
* brain stem and spinal cord

Performing movement in robot arms

  • Joint angle space [θ1, θ2, ...]: degree of freedom
  • Operational space (end point vector)

High level -> mid level -> low level control signals

Similar to the latter half of autoencoder.

Functional level model

Loop of
* Cortex: memory / transformations, crude selection
* Basal ganglia: utility -> action (cosine similarity)
* Thalamus: monitoring

Rules for manipulation

  • Symbols, fuzzy logic, but not compatible to neural networks
  • Basal ganglia: manipulation
    $$
    \vec{s} = M_b \cdot \vec{w}
    $$
  • Rehearsal of alphabet sequence.py

Attention

Timing of neuron's response: ~15ms delay to make decision.

The less utility difference, the longer the latency.

  • Parametric study on computational models

Tower of Hanoi task

  • In the Eliasmith paper, the perceptual strategy from symbolic calculation is not biologically plausible (the rule is not learned).
  • 150k neurons

ACT-R architecture

Symbol -> neural networks

Comparable to fMRI BOLD signals.

Learning and memory

Ref: Neuroeconomics, decision making and the brain.

Learning: stimulus altered behavior. Not hardwired.

Memory: storage of learned information.

Learning in biology

  • Neural level: synapse strength, neural gene expression
  • Brain regions: coordination

Machine learning

  • Weight changes in synaptic connections
  • Neural activity states: dynamic stability (attractor)

Biological memories in detail

  • Declarative (explicit) memory: medial temporal lobe and neocortex
  • Events (episodic): 5W1H, past experience
  • Facts (semantic): grammar, common sense (context-free)
  • Non-declarative memory
  • Procedural: basal ganglia
  • Perceptual priming: short path for recall for previous stimuli
  • Conditioning: cerebellum
  • Non-associative: reflex
  • Sensory memory: buffer
  • 9-10 sec for echoic (hearing)
  • 0.5 sec for iconic (vision)

Conditioning

  • Pavlov's dog: classical conditioning
  • Skinner: operant conditioning
  • Acquisition, extinction, spontaneous recovery (long-term memory)
Terms
  • Memory: recall / recognize past experience
  • Conditioning: associate event and response
  • Learning: change behavior to stimuli
  • Plasticity: change neural connections
  • Functional: chemical connection change
  • Structural: physical connection change
Hippocampus

Dentate gyrus -> CA3 -> CA1
* Long-term potentiation (LTP) upon high freq stimulation: enhances EPSP
* Long-term depression (LTD) upon low freq stimulation: inhibits EPSP
* Neural growth even at 40 y/o

Inside LTP / LTD

Neurotransmitters
* Glutamate (AMPAR, NMDAR) : excitatory
* GABA: inhibitory

Second messengers (mid-term effects)

Learning rules

Hebbian
  • Freud -> Hebb (1949): fire together, wire together

\[ \Delta w = \epsilon\gamma_i\gamma_j \]

\(\epsilon\): learning rate

\(\gamma_i\): postsynaptic firing rate

\(\gamma_j\): presynaptic firing rate

STDP
  • Spike-time-dependent plasticity from experimental data
  • Pre synaptic spike then post one: LTP
  • Post synaptic spike then pre one: LTD
hPES rule

Limitations on weight change

\[ \Delta w_{ij} = \alpha_ja_{j}(k_1e_jE + k_2a_i(a_j - \theta)) \]

Reinforcement learning

E.g. operant conditioning (Skinner)

Value
  • Expected value \(E[ x ]\)
  • Expected utility \(U(E[ x ]) \approx log(E[ x ])\)
  • Basic axiomatic form (Pareto)
  • Weak axioms of revealed preference (WARP)
  • Generalized axiom of revealed preference (GARP)
Value function V(s) and prediction error

\(V_{k+1}(s_k) = V_k(s_k) + \alpha\delta_k\)

Error: \(\delta_k = r_k - V_k(s_k)\)

For multiple stimuli: Rescorla-Wagner model

\(V_k^{net} = \Sigma V_{k}(stim)\)
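The value update can be simulated in a few lines; a sketch in which a constant reward r = 1 is delivered and V converges toward it (α = 0.1 is an arbitrary learning rate):

```python
alpha, r = 0.1, 1.0
V = 0.0
for _ in range(100):
    delta = r - V          # prediction error
    V = V + alpha * delta  # value update

print(V)                   # approaches r = 1
```

The prediction error δ shrinks toward zero as V converges, mirroring the dopamine story in the next section: a fully predicted reward produces no error signal.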

Biological RL

Dopamine reward pathway for movement and motivation.

Increased dopamine secretion for a sudden (unexpected) reward, matching the prediction error \(\delta_k = r_k - V_k(s_k)\)

Decision making
  • Problem: no immediate feedback (reward) => need to think about the future and maximize aggregate reward
  • Bellman equation: reduction of recursive reward with temporal difference (\(V_k(S_{t+1})- V_k(S_t)\))

\(V(S_t) = r(S_t) + E[V(S_{t+1})|S_t]\)

\(\delta_t = r_t + V_k(S_{t+1})- V_k(S_t)\)
* Markov decision process
* Q learning
* Q function \(Q(s, \pi)\)
* Policy \(\pi(s)\): mapping state to actions

\(Q_{t+1}(S_t, a_t) = Q_{t}(S_t, a_t) + \alpha\delta_t\)

\(\delta_t = r_t + \gamma \max_a Q_t(S_{t+1}, a) - Q_t(S_t, a_t)\)
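A minimal tabular Q-learning sketch on a made-up 2-state chain (state 1 is terminal and pays reward 1; α and γ are arbitrary):

```python
import numpy as np

# States 0, 1; actions 0 (stay), 1 (advance). Reaching state 1 pays r = 1.
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

rng = np.random.default_rng(0)
for _ in range(200):                     # episodes
    s = 0
    while s == 0:
        a = rng.integers(n_actions)      # explore randomly
        s_next = 1 if a == 1 else 0
        r = 1.0 if s_next == 1 else 0.0
        delta = r + gamma * Q[s_next].max() - Q[s, a]   # TD error
        Q[s, a] += alpha * delta
        s = s_next

print(Q[0])  # Q(0, advance) ends up larger than Q(0, stay)
```

"Stay" still earns value (discounted future reward via γ), but less than "advance", so the greedy policy picked from Q is the optimal one.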

SPAUN model

SPAUN = Semantic pointer architecture unified network, all things put together

  • Single perceptual system (eye)
  • Single motor system (arm)
  • Background knowledge (SPA)
  • Abilities
  • Similar to human in working mem limitations (3-7)
  • Behavior flexibility
  • Adaptation to reward
  • Confusion to invalid input

Intro to mechanobiology

Course Notes of Introduction to mechanobiology.

Course information

Reference articles

Physical regulation of cells and tissues

  • cosmonaut: osteoporosis due to microgravity
  • Right arm bigger than the left (tennis player)
  • shape of liver cells (polygons) vs muscle cells (elongated cylinders)
  • Tissues / cells could sense physical cues
  • tension, compression, fluid flow (hydrodynamic pressure, shear stress etc.), osmotic pressure, ion current
    • Laminar flow (low Reynolds number) vs turbulent flow => different endothelial response inside blood vessels
  • Sensed by mechanosensor complexes attached to both cytoskeletons inside cell and ECM outside, affecting gene expression, electrophysiology, etc.
  • Heterogeneous structure and stiffness inside cartilage (from synovial surface to bone surface)

Clinical relevance of mechanobiology

  • deafness: cochlear hair cells
  • arteriosclerosis: endothelial and smooth muscle cells
  • muscular dystrophy and cardiomyopathy: myocytes and fibroblasts
  • Congenital muscular dystrophy due to mutations in dystrophin (part of anchor for cytoskeleton)
  • Sarcopenia in the elderly
  • Aspect ratio deviation in dilated / hypertrophic cardiomyopathy
  • Weight training : NADPH oxidase produces ROS in response to tension => muscle growth (young people) or death (old people)
  • Cartilage: chondrocytes
  • Spatial difference in cell alignment, ECM composition (stiffer towards the bone surface)
  • Running -> compression -> stimulates ECM and cell growth
  • Axial myopia and glaucoma: optical neurons, fibroblasts
  • Polycystic kidney disease (PCKD): epithelial cells of renal tubules
  • Cancer: cancer cells and cancer-associated fibroblast (CAF)
  • Adhesion to blood vessel walls: WBC (neutrophils, monocytes, macrophages)

Cellular biology 101

Cell structures, cell cycle and replication, central dogma and information flow, signal transduction and homeostasis.

  • DNA structures: H-bond => thermodynamically stable in antiparallel double helix, good for information storage
  • RNA structure: more reactive (2' hydroxyl group), not as stable as DNA, good for reaction catalysis (e.g. ribozymes) and carrying transient information (e.g. messenger RNA)

Information flow in mechanobiology

Outside-in vs inside-out

  • Outside-in: force transduction (directly or via ECM) - transducer (complex) - signal transduction cascade (amplifiers, filters, logical gates) - gene expression involving cell cycle, metabolism, and survival
  • Inside-out: Cells actively tug the external environment by motor proteins and cytoskeletons - determination of external physical cues (e.g. stiffness) - cell growth and differentiation

Lipid rafts in the plasma membrane

  • Rich in cholesterol (less fluidity) and glycolipids
  • Rich in cytoskeleton anchor complexes and mechanoreceptor

Cytoskeletons

  • Actin: stress fibers, with motor proteins (e.g. myosin), polarity(+)
  • Microtubule: compression fibers, consisting of tubulin alpha and beta, polarity(+), with motor proteins (kinesin and dynein)
  • intermediate filaments (keratin filaments): sturdy and less dynamic, buffer between the nucleus and the cell surface

Cell-cell junctions

  • Tight junctions: preventing leakage from apical side to basal side
  • Gap junctions (channeling two adjacent cells)
  • Desmosomes: cadherins (requires divalent cations), connected to keratin filaments (structure support)
    They know their spatial arrangement (basal vs apical)

Cell-matrix junctions

  • Reference book: mechanobiology of cell-cell and cell-matrix interactions
  • Basal lamina (ECM) - integrins (transmembrane part) - anchor complex - actin
  • ECM:
  • glycosaminoglycans (GAG): extremely hydrophilic, providing compressive strength
  • fibrous proteins (e.g. collagen): tensile strength
  • Example of neonatal rat cardiomyocyte growth and development
  • Soft surface (100~300 Pa): round and undifferentiated
  • Native environment in the heart (10 kPa): cylindrical with the best aspect ratio (7:1) with sarcomeres.
  • Stiff surfaces (glass): flat and polygonal

Crosstalk of cell-cell junctions and cell-matrix junctions

  • Cadherin and integrin pathways
  • Cellular movement, differentiation, and growth

Cartilage tissue structure and homeostasis

  • chondrocyte and ECM interactions
  • influenced by physical forces (pressure, shear stress)
  • Spatial heterogeneity of ECM composition (stiffness) and cell arrangement (clustering and orientation)

Mechanotransduction

Information flow

external stimulation -> outside-in -> processing -> inside-out -> cellular response (behavior)

Biological signal processing in a cell as a black box ?

  • Phenotype-dependence: same ligand + different context (cell type, receptor) = different response

Inside-out signaling

  • Altered protein function (activation/ deactivation): ms to secs
  • Altered gene expression (protein synthesis): hours to days
  • Central dogma: DNA -> mRNA -> protein (nowadays with a lot of regulations)
  • Gene expression level \(\approx\) mRNA content \(\approx\) protein activity (e.g. RNAseq)

Outside-in signaling

  • Extracellular signal -> transmembrane receptor -> intracellular relays, amplifiers, modulators...

Receptors

  • Ion-channel-coupled:
  • NMDA receptor: opens Ca channel when binds to glutamate
  • MET channel in cochlear hair cells: opens K channel when stretched
  • G-protein-coupled: a lot of targets
  • Enzyme-linked: e.g. EGFRs, JAK-STAT

Relays and amplifiers:

  • second messengers (Ca, IP3, DAG, cAMP, ...), kinase cascades
  • A complex network of signal transduction pathways => bioinformatics

Molecular switches

  • phosphorylation by kinase / dephosphorylation by phosphatase
  • GTP-binding: GDP -> GTP by exchange; GTP -> GDP by intrinsic GTPase activity
  • GEF: guanine nucleotide exchange factor
  • GAP: GTPase-activating protein

Signal transduction pathways (simplified)

GTP-linked receptors

  • Ligand binds to receptor, then the LR-complex binds to G protein

  • G protein exchanges GDP for GTP and dissociates from the LR-complex

  • alpha subunit of G-protein dissociates from beta-gamma subunits

  • In Gs protein, the alpha subunit activates adenylyl cyclase (AC), converting ATP to cAMP (2nd messenger) for the cascade

  • In Gq protein, the beta-gamma subunits activate PLC-β, cleaving a special phospholipid (PIP2) into IP3 and DAG, which in turn trigger Ca release from the ER and activate PKC for the cascade.

Mechanoreceptor and transduction

  • Stretch-activated ion channels
  • Integrins
  • E-cadherins e.g. fluid shear stress -> TF (β-catenin) to nucleus
  • Physical forces affects gene transcriptions (exp: movement of transcription factors after physical stress)
  • Stretching peptides -> exposure of folded AA residues -> signals (does not require a living cell)
  • Compression of chondrocytes: heterogenous, anisotropic strains and (probably) stress
  • Mechanosensing by adhesion site recruitment and stretching cytoskeletons -> substrate component and stiffness.
  • Chromatin deformation by force changes their relative positions and could alter gene expressions.
  • Force could change gene expression directly!
  • Osmotic loading of chondrocytes: changes in osmolarity => altered chromatin structure

Solid Mechanics Primer

A crawling cell uses pseudopod and forward attachment point to move forward.

Rigid body approach

  • Sum of moment = 0, Sum of external force = 0 (Newton 1st law)
  • Or applying Newton 2nd law
  • Free body diagram (reaction force: force exerted to the cell)
  • But cells are deformable, so the rigid-body approach breaks down

Deformable cell

  • Displacement field (\(\Delta x\) inside the cell) is not uniform
  • Displacement is related to mechanical properties (stiffness)
  • Resolution down to molecular level is too much. Treat the cell as a continuum of infinitesimal elements (~100nm)

Stress

  • Scaled force, averaged by area (the same unit as pressure), affected by shape
  • A tensor described by two vectors
  • the force
  • the normal vector of the plane
  • \(\sigma_{xy}\) : On the yz plane (normal vector x), force with y direction
  • Normal stress: force parallel to the normal vector (tensile and compressive)
  • Shear Stress: force perpendicular to the normal vector
  • Mixed: decompose to the two above first

Strain

  • Displacement rescaled (normalized) by the original length
  • Averaged deformation (dimension-less)
  • Axial strain: \(\epsilon = \frac{\Delta L}{L}\), engineering strain, assuming \(\Delta L \ll L\)
  • Shear strain: \(\gamma = \frac{\delta}{L} = \tan\theta \approx \theta\)
  • Transverse strain: Poisson's ratio \(\nu = -\epsilon_t / \epsilon_a > 0\)

Stress-Strain relationships

  • Linear (Young's modulus) -> nonlinear -> yield point (plastic change) -> ultimate -> break

Stress and strain fields

  • Average force (stress) and average displacement (strain) in the cell
  • Force and torque equilibrium (assuming little acceleration and rotation)
  • For linearly elastic materials: 6 independent components
  • Experimental results: displacement field -> What we want: stress fields
  • Applying stress-strain relationships (Young's modulus, Poisson ratio, ...)

Stress on a (linearly elastic) material

  • Body force is insignificant to surface force
  • Large surface-to-volume ratio in small scales
  • Decompose surface forces on a small cube to tensors (x, y, z)
  • \(\sigma_{jj} = \lim_{A \rightarrow0}\frac{S_{jj}}{A}\), \(\tau_{ij} = \lim_{A \rightarrow0}\frac{S_{ij}}{A}\)

  • Equilibrium of stress

  • Take 1st Taylor expansion of surface forces related to certain directions
  • Sign convention: negative sides take negative values
  • Tensor representation: \(\sigma_{ij, j} = 0\) (Balance of volume forces)
  • \(\sigma_{ij} = \sigma_{ji}\) due to moment balance

Kinematics

  • Converting displacement to strain
  • Normal strain \(\epsilon_{ii} = \frac{\partial u_i}{\partial x_i}\)
  • Shear strain \(\theta \approx \tan\theta = \frac{\partial u_i}{\partial x_j}\)
  • \(\gamma = \tan\theta \approx \theta\)
  • Continuous shear strain \(\epsilon_{ij} = \gamma_{ij} / 2\)
  • Symmetry: \(\epsilon_{ij} = \epsilon_{ji}\), 6 independent strains

Constitutive equations

6 stress and 6 strains = 36 parameters

For a linearly elastic material under small strain (< 1%)

  • Young modulus E: \(\sigma_{jj} = E\epsilon_{jj}\)
  • Shear modulus G: \(\tau_{ij} = G\gamma_{ij} = 2G\epsilon_{ij}\)
  • Poisson ratio \(\nu\) : \(\nu = -\epsilon_{jj} / \epsilon_{ii}\). For (nearly incompressible) biomaterials \(\approx 0.5\)
  • \(G = \frac{E}{2(1 + \nu)}\) for small strain
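The constitutive relations give concrete numbers quickly; a sketch using E = 10 kPa (roughly the "native heart" stiffness quoted earlier) and the incompressible-biomaterial Poisson ratio:

```python
E = 10e3      # Young's modulus, Pa (native cardiac tissue, from the notes)
nu = 0.5      # Poisson ratio for (nearly) incompressible biomaterials

G = E / (2 * (1 + nu))     # shear modulus from E and nu
stress = E * 0.005         # normal stress at 0.5% axial strain (linear regime)

print(G)                   # about 3333 Pa
print(stress)              # 50 Pa
```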

Homogeneity

The scale we concern is much larger than the irregularities in the material.

e.g. A collagen gel is homogeneous on the scale of mm, not nm.

Isotropy

In either direction, the response is the same.

Traction vector

  • Forces along the plane with xyz components
  • Force equilibrium: Traction forces = stresses * areas
  • Could be represented in matrix form
  • Use: displacement (beads) -> strain -> stress -> traction force
  • With Green function (complicated)

Large deformation (>1%)

Deformation gradient (F)

\(\vec{B} = F\vec{A}\) when \(\vec{A}\) deforms to \(\vec{B}\)

Principal directions of deformation

Find eigenvalues and eigenvectors of \(e = \frac{1}{2}(F^TF-I)\)

  • eigenvectors: principal directions
  • eigenvalues: strain
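The eigen-decomposition is a one-liner in NumPy; a sketch for a simple deformation gradient F that doubles lengths along x (assumed for illustration), using the Green strain \(e = \frac{1}{2}(F^TF - I)\):

```python
import numpy as np

F = np.array([[2.0, 0.0],          # deformation gradient: 2x stretch along x
              [0.0, 1.0]])

e = 0.5 * (F.T @ F - np.eye(2))    # Green strain tensor
vals, vecs = np.linalg.eigh(e)     # symmetric tensor => eigh

print(vals)                        # principal strains: 0 (y) and 1.5 (x)
print(vecs)                        # principal directions (columns)
```

In practice F would be fitted from tracked bead displacements, and the eigenvectors tell you along which axes the cell or gel was stretched most.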

Rheology

  • Fluid mechanics
  • Viscoelasticity: a subset

Fluid

  • Shear stress -> continual deformation (flow)
  • Defined by density (ρ) and viscosity (η)
  • Increased viscosity = harder to push sideways

Viscosity

  • 1 poise = 0.1 Ns/m^2
  • water = 0.001 Ns/m^2
  • Newtonian fluid : viscosity independent of shear stress
  • Linear flow profile
  • \(\tau = \eta \frac{du}{dy}\)
  • The latter (\(\frac{du}{dy}\)) is called the shear strain rate or velocity gradient

Stress balance inside a fluid

  • Internal friction (viscosity) and external force (stress)
  • Shear strain \(\gamma = \frac{\Delta x}{dy}\)
  • Shear strain rate (velocity gradient) \(\frac{du}{dy} = \frac{d}{dt}(\frac{\Delta x}{dy})\)

Microscopic model of viscosity

  • Particles move at different speeds at different layers
  • They also diffuse and bump the neighboring ones due to the speed difference.
  • Friction is proportional to velocity gradient (shear strain rate)

Non-Newtonian fluid

https://en.wikipedia.org/wiki/Non-Newtonian_fluid
* Viscosity is dependent on velocity gradient
* Blood: Bingham fluid (flows only when the shear stress exceeds a yield threshold)
* Ketchup: Shear thinning = pseudoplastic
* Corn starch with water: shear thickening = dilatant

Viscoelastic strain in response to oscillatory stress

https://en.wikipedia.org/wiki/Viscoelasticity

\(\sigma = \sigma_0\cos(\omega t)\), \(\omega = 2 \pi f\)
* Similar to AC circuits
* Elastic component: in-phase
* Viscous component: causes a phase lag of up to 90 degrees

Complex modulus (by Euler formula)

\(e^{ix} = \cos(x) + i\sin(x)\)

  • Stress: \(\sigma^* = \sigma_0e^{i\omega t}\)
  • Strain: \(\epsilon^* = \epsilon_0e^{i(\omega t - \delta)}\)
  • Modulus: \(E^* = \frac{\sigma_0}{\epsilon_0}e^{i\delta} = E_1 + iE_2\)
  • Storage / elastic modulus: \(E_1 = \sigma_0\cos\delta / \epsilon_0\)
  • Loss / damping modulus: \(E_2 = \sigma_0\sin\delta / \epsilon_0\)
  • Complex shear modulus: \(G^* = \frac{\tau^*}{\gamma^*}\)
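The split of the complex modulus into storage and loss parts is just the real/imaginary decomposition; a sketch with made-up amplitudes and a 30-degree phase lag:

```python
import numpy as np

sigma0, eps0 = 2.0, 1.0        # stress and strain amplitudes (made up)
delta = np.pi / 6              # 30-degree phase lag (made up)

E_star = (sigma0 / eps0) * np.exp(1j * delta)   # E* = (sigma0/eps0) e^{i delta}
E1, E2 = E_star.real, E_star.imag               # storage and loss moduli

print(E1)                      # sigma0*cos(delta)/eps0
print(E2)                      # sigma0*sin(delta)/eps0
```

A purely elastic material has δ = 0 (all storage); a purely viscous one has δ = 90° (all loss).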

Hysteresis

  • Viscous component: transforms mechanical energy into heat
  • In biomaterials, the loop is repeatable and independent of loading rate

Creep

  • Strain increases when holding constant stress
  • Reorganization of molecules
  • Movement of water (in most biomaterials)

Stress relaxation

  • Stress decreases when holding constant strain
  • Reorganization of molecules

Viscoelasticity Models

https://en.wikipedia.org/wiki/Viscoelasticity#Constitutive_models_of_linear_viscoelasticity

  • Springs ( \(\sigma = E\epsilon\) ) and dashpots ( \(\sigma = \eta \frac{d\epsilon}{dt}\) )
  • Series: same stress, summing strain
  • Parallel: same strain, summing stress
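The spring-dashpot models can be simulated directly; a sketch of a creep test on a Kelvin–Voigt element (spring and dashpot in parallel, so \(\sigma = E\epsilon + \eta\frac{d\epsilon}{dt}\)), with made-up constants:

```python
E, eta = 1.0, 0.5        # spring stiffness, dashpot viscosity (arbitrary units)
sigma0 = 1.0             # constant applied stress (creep test)
dt, T = 0.001, 5.0

eps = 0.0
for _ in range(int(T / dt)):
    deps_dt = (sigma0 - E * eps) / eta   # from sigma = E*eps + eta*deps/dt
    eps += dt * deps_dt                  # Euler step

# Strain creeps toward sigma0 / E with time constant eta / E
print(eps)
```

This reproduces the creep behavior described above: strain keeps increasing under constant stress, saturating at the purely elastic value σ₀/E.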

Fluid mechanics

Reference
Nelson, Biological Physics, ch. 5

Difference between solid and fluid mechanics

  • Solid: inertia and acceleration, time-dependent, no convection (-), constant density
  • Fluid: Convection (+), may have variable density
  • Laminar flow: no inertia or time dependent terms involved

Force balance inside a fluid element

  • Pressure (normal stress)
  • Friction, viscous force

General assumptions

  • Steady flow: force balanced
  • Newtonian fluid: viscosity does not depend on shear rate
  • Incompressible: constant density

Acceleration of the fluid

\[dv(x, t) = v(x + dx, t + dt) - v(x, t) = \frac{\partial v}{\partial x}dx + \frac{\partial v}{\partial t}dt\]
\[dv(x, t) = \frac{\partial v}{\partial x}v\,dt + \frac{\partial v}{\partial t}dt\]
\[a(x, t) = \frac{dv(x, t)}{dt} = \frac{\partial v}{\partial t} + \frac{\partial v}{\partial x}v\]
Former term: solid mechanics, latter term: fluid convection

In 3D space:
\[a = \frac{\partial v}{\partial t} + (v \cdot \nabla) v\]

Net pressure force

Notice the negative sign (force is in the opposite direction of the pressure gradient)
\[\delta f_x^p \approx -\frac{\partial p}{\partial x} dx_1dx_2dx_3\]
\[\delta f^p \approx (-\nabla p)\, dx_1dx_2dx_3\]

Viscosity and Newtonian fluid

Shear stress: \(\tau = \eta \frac{du_1}{dx_2}\), giving a linear flow profile between parallel plates
Shear force on the two faces of a fluid element:
\[f_x^v(x_1, x_2) = -\tau\, dx_1dx_3 = -\eta\frac{\partial v_1(x_2)}{\partial x_2} dx_1dx_3\]
\[f_x^v(x_1, x_2 + dx_2) = \eta\frac{\partial v_1(x_2 + dx_2)}{\partial x_2} dx_1dx_3\]
\[f_{x_1x_2}^v \approx \eta \frac{\partial^2 v_1}{\partial x_2^2} dx_1dx_2dx_3\]
(second-order Taylor expansion)
Net shear force:

\[\delta f_v = \eta \nabla^2 v dx_1dx_2dx_3\]

Incompressibility

Linear strain:
\[d\epsilon_x = \frac{\Delta x^\prime - \Delta x}{\Delta x} = \frac{dx (\frac{\partial v_x}{\partial x})dt}{dx} = \frac{\partial v_x}{\partial x}dt\]

Size change in 3D space:
\[dx_1dx_2dx_3\left(1 + \left(\frac{\partial v_1}{\partial x_1} + \frac{\partial v_2}{\partial x_2} + \frac{\partial v_3}{\partial x_3}\right)dt\right)\]

Incompressible: \(\nabla \cdot v = 0\), i.e. divergence of velocity field = 0

Newton's second law

\[F = ma = \rho\, dx_1dx_2dx_3\, Y_i + \delta f^p + \delta f^v\]
* 1st term: body force (e.g. gravity)

Dividing by the volume element (\(dx_1dx_2dx_3\)) gives every term the dimension of force density:
\[\rho \frac{dv}{dt} = \rho Y - \nabla p + \eta \nabla^2 v\]

We get the Navier-Stokes equation:
\[\rho \left(\frac{\partial v}{\partial t} + (v \cdot \nabla) v\right) = \rho Y - \nabla p + \eta \nabla^2 v\]

  • \(\frac{\partial v}{\partial t}\): Solid acceleration
  • \((v \cdot \nabla) v\): fluid convective term
  • Y: body force
  • \(\nabla p\): pressure term
  • \(\eta \nabla^2 v\): viscous term

Microscopic model of fluid friction

Velocity gradient across adjacent layers plus particle diffusion => momentum exchange and frictional drag

Particle drift and friction law (in small Re)

\(F = \zeta v\), where \(\zeta\) is the drag coefficient
* Stokes' law (for spherical objects): \(\zeta = 6 \pi \eta R\)
* Electrophoresis: \(F = q\epsilon = \zeta v\) (\(\epsilon\): electric field)
* Sedimentation of colloid particles: \(f_s = -m_{net}g = \zeta v\)
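Combining Stokes' law with the sedimentation balance gives a terminal drift velocity. A short sketch, with illustrative bead-in-water numbers (not measured values):

```python
import math

def stokes_drag_coefficient(eta, R):
    """Stokes' law for a sphere: zeta = 6 * pi * eta * R."""
    return 6.0 * math.pi * eta * R

def sedimentation_velocity(rho_particle, rho_fluid, R, eta, g=9.81):
    """Terminal velocity where net gravity on the sphere balances Stokes drag."""
    volume = (4.0 / 3.0) * math.pi * R**3
    f_net = (rho_particle - rho_fluid) * volume * g  # buoyancy-corrected weight
    return f_net / stokes_drag_coefficient(eta, R)

# Illustrative: 1 um polystyrene bead (rho ~ 1050 kg/m^3) in water
v = sedimentation_velocity(1050.0, 1000.0, 1e-6, 1e-3)
print(v)  # on the order of 1e-7 m/s, i.e. ~0.1 um/s
```

The \(R^2\) dependence of the terminal velocity is why small colloids effectively never settle.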

Reynolds number (Re)

  • Dimensionless property
  • Fluid runs around a particle
  • acceleration = \(\frac{v^2}{R}\)
  • viscous force = \(\eta\frac{v}{R^2}\)
  • Substitute into the Navier-Stokes equation: \(\frac{\rho v R}{\eta} = \frac{R^2}{\eta v}f_{ext} + 1\)
Large Re
  • Dominated by inertia
  • Fluid is mixed, turbulent flow with vortices
  • Examples: human in water, rockets
  • \(f_{ext} \approx \rho \frac{v^2}{R}\)
Small Re
  • \(\frac{\rho v R}{\eta} \ll 1\), \(f_{ext} \approx \frac{\eta v}{R^2}\), drifting velocity proportional to drag force.
  • Dominated by viscous drag, laminar flow (Re < 10)
  • Acceleration and inertia term extremely small, time reversible
  • Examples: bacteria in water, dyes in corn syrup
  • Reciprocal motion does not work (due to time reversibility)
  • Periodic movement (cilia) and rotational movement (flagella) break the symmetry of the drag coef. (and thus the drag force) and create propulsion.
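The two regimes above can be made concrete by computing \(Re = \rho v R/\eta\) for the examples in the text; the inputs below are order-of-magnitude illustrations for water:

```python
def reynolds(rho, v, L, eta):
    """Reynolds number Re = rho*v*L/eta: ratio of inertial to viscous effects."""
    return rho * v * L / eta

# Water: rho ~ 1000 kg/m^3, eta ~ 1e-3 Pa*s (illustrative magnitudes)
re_human = reynolds(1000.0, 1.0, 1.0, 1e-3)         # swimmer: ~1e6, inertia dominates
re_bacterium = reynolds(1000.0, 30e-6, 2e-6, 1e-3)  # bacterium: ~6e-5, viscosity dominates
print(re_human, re_bacterium)
```

Eleven orders of magnitude separate the two, which is why reciprocal swimming strokes that work for humans do nothing for bacteria.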

Motion of fluid between parallel plates in small Re

  • Applying the Navier-Stokes equation, ignoring the acceleration and body force terms. Only the pressure and the drag terms interact.
  • No-slip boundary condition (velocity = 0 at the walls)
  • Parabolic flow profile
  • Flow rate \(Q \propto \Delta p\, r^4\) (Hagen-Poiseuille)
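The fourth-power radius scaling is the Hagen-Poiseuille relation \(Q = \pi \Delta p r^4 / (8 \eta L)\) for a cylindrical tube; a sketch with illustrative values:

```python
import math

def poiseuille_flow(delta_p, r, eta, L):
    """Hagen-Poiseuille volumetric flow rate: Q = pi * delta_p * r^4 / (8 * eta * L)."""
    return math.pi * delta_p * r**4 / (8.0 * eta * L)

# Illustrative: water (eta = 1e-3 Pa*s), 10 cm tube, 100 Pa pressure drop
Q1 = poiseuille_flow(100.0, 1e-3, 1e-3, 0.1)  # 1 mm radius
Q2 = poiseuille_flow(100.0, 2e-3, 1e-3, 0.1)  # 2 mm radius
print(Q2 / Q1)  # doubling the radius gives 2**4 = 16x the flow
```

The extreme radius sensitivity is why small changes in vessel or canaliculus diameter strongly modulate flow.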

Response of osteocyte to fluid flow

  • Bone structure
  • Cortical bone
  • Trabecular bone (spongy)
  • Bone marrow
  • Osteons: Concentric circles
  • Osteocytes: with channels connecting each other and the blood vessels
  • Osteoclast: A special macrophage removing old bones
  • Osteoblast: become osteocytes once the surrounding matrix mineralizes

  • Fluid mechanics in the bone

  • Tension, compression
  • Difference in hydrostatic pressure
  • Fluid flow and shear stress on the osteocytes
  • Piezoelectric collagen I ?
  • Stimulates osteocytes to produce more osteopontin

  • How to separate flow shear stress and convection of nutrients

  • Increase the flow shear stress by raising the viscosity (adding dextran) while keeping the flow rate, and thus nutrient convection, unchanged


Super-resolution microscopy techniques

Course notes of Super-resolution microscopy.

Course information

  • Lecturer: Tony Yang
  • Time: 789 (W)
  • Location: MD225
  • Reference books
  • Bahaa Saleh and Malvin Teich, Fundamentals of Photonics, 2nd ed., Wiley, New York, 2007.
  • Erfle, Holger, Super-Resolution Microscopy: Methods and Protocols, Humana Press, 2017
  • Grading:
  • Participation in classroom discussions: 25%
  • Midterm: 30%
  • Term paper: 45%

Photonics

Ray optics

Applies when the length scale of the instrument is much larger than the light wavelength.
Neither wave properties (diffraction, interference) nor photon properties are considered.
Optical path length = line integral from one point to another, weighted by the refractive index (n):
\[\int_A^B n(r)\,ds\]

Fermat's principle

Light takes the path of minimal travel time.
Snell's law:
\[n_1\sin\theta_1 = n_2\sin\theta_2\]

Huygens' principle

Wavefront and wavelets: explains refraction, diffraction and interference

Total internal reflection

Occurs when light travels from an optically denser to a less dense material (\(n_1 > n_2\))
and the incidence angle \(\theta\) exceeds the critical angle \(\theta_c = \sin^{-1}(\frac{n_2}{n_1})\).
Little energy (<0.1%) is lost as an evanescent wave, with a penetration depth of about 100-200 nm.
Used in fiber optics and super-resolution microscopy.
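Snell's law and the critical-angle condition above fit in a few lines; glass-to-air indices are used purely for illustration:

```python
import math

def snell_theta2(n1, theta1_deg, n2):
    """Refraction angle from Snell's law; returns None under total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None  # no refracted ray exists: total internal reflection
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """theta_c = asin(n2/n1), defined only when going from dense to loose (n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

# Glass (n = 1.52) to air (n = 1.00)
print(critical_angle(1.52, 1.00))      # ~41.1 degrees
print(snell_theta2(1.52, 60.0, 1.00))  # 60 deg > theta_c -> None (TIR)
```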

Negative-index metamaterials

\[ n = \left( \frac{\epsilon\mu}{\epsilon_0\mu_0} \right)^{1/2} \in \mathbb{C} \]

Superlensing breaking through the diffraction limit.
n is frequency-dependent

Spherical mirrors

  • Approximation of the 'perfect' parabolic mirror at small angles
  • For small (paraxial) angles: \(\theta \approx \sin\theta \approx \tan\theta\)
\[ \begin{aligned} \frac{1}{z_1} &+ \frac{1}{z_2} = \frac{1}{f} \cr f &= R/2 \cr m &= \frac{y_2}{y_1} = \frac{-z_2}{z_1} \end{aligned} \]

Spherical boundaries of different refractive indices

\[ \begin{aligned} \frac{n_1}{z_1} &+ \frac{n_2}{z_2} = \frac{n_2 - n_1}{R} \cr y_2 &= \frac{-n_1}{n_2} \frac{z_2}{z_1} y_1 \end{aligned} \]

Thin lens from two spherical surfaces

\[ \begin{aligned} \theta_3 &= \theta_1 - y / f \cr \frac{1}{f} &= (n_2-n_1)(\frac{1}{R_1} - \frac{1}{R_2}) \cr \frac{1}{z_1} &+ \frac{1}{z_2} = \frac{1}{f} \cr m &= \frac{-y_2}{y_1} = \frac{-z_2}{z_1} \end{aligned} \]

Transformation in matrix forms

Light rays as 2-component vector
Components as 2 by 2 matrix.
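The 2-component ray vector and 2×2 component matrices are the standard ABCD (ray transfer matrix) formalism. A minimal sketch in plain Python traces a parallel ray through a thin lens to its focal plane; the 0.1 m focal length is an arbitrary illustrative choice:

```python
def free_space(d):
    """Ray transfer matrix for propagation over distance d."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """Ray transfer matrix for a thin lens of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(A, B):
    """2x2 matrix product; in A @ B the rightmost matrix acts on the ray first."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def trace(M, ray):
    """Apply matrix M to a ray given as (height y, angle theta)."""
    y, th = ray
    return (M[0][0] * y + M[0][1] * th, M[1][0] * y + M[1][1] * th)

# Parallel ray (theta = 0) through a thin lens, then propagation to the focal plane
system = matmul(free_space(0.1), thin_lens(0.1))  # f = 0.1 m
y, th = trace(system, (0.01, 0.0))
print(y)  # parallel rays cross the axis at the focal plane: y = 0
```

Cascading optical elements is just multiplying their matrices, which is the whole appeal of the formalism.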

Wave optics

Considerations

  • Diffraction (+), polarization (-), Fraunhofer (+), Fresnel (+)
  • Maxwell equations: EM (E and B) vector fields
  • optic phase is the central quantity.
  • phase match at boundaries

Wave equation

2nd derivative of space proportional to that of time
u: space; t: time; v: phase velocity; \(\nu\): frequency; k: wave number; \(\omega\): angular frequency; n: refractive index

\[ \begin{aligned} \nabla^2u &= \frac{1}{v^2}\frac{\partial^2u}{\partial t^2} \cr k &= \frac{2\pi}{\lambda} \cr \omega &= 2\pi\nu \cr v &= \frac{c}{n} \end{aligned} \]
  • Linear equations => superposition possible
  • Complex notation by Euler's formula
    a: amplitude, ϕ(r): phase, ω: angular velocity
    periodic both in time and space
    the real part = physical quantity
\[ U (r,t) = a(r)exp(i\phi(r))exp(i\omega t) \]

Helmholtz equations

  • regardless of time
\[ \begin{aligned} U (r) = a(r)exp(i\phi(r)) \cr \nabla^2U (r) + k^2U (r) = 0 \cr \end{aligned} \]

Wavefronts

  • surfaces of constant phase (等相位面)

  • Plane waves in media with refractive index n
\[ \begin{aligned} k &= k_0 n \cr \lambda &= \frac{\lambda_0}{n} \end{aligned} \]

The bigger the n, the higher the spatial frequency (the shorter the wavelength); the temporal frequency is unchanged.

Spherical waves

\[ \begin{aligned} U (r) &= \frac{A}{r}exp(-ikr) \cr r &= \sqrt{x^2 + y^2 + z^2} \end{aligned} \]

Fresnel approximation: paraxial (\(z^2 \gg x^2 + y^2\)): spherical -> paraboloidal -> planar wave

\[ \begin{aligned} U (r) &= \frac{A}{z}exp(-ikz) exp \left( -ik \frac{x^2 + y^2}{2z} \right) \cr \nabla^2U (r) &+ k^2U (r) = 0 \cr \end{aligned} \]

Reflection, Refraction

  • Results are similar to ray optics at planar surfaces for planar waves
  • Plane wave through thin lens -> paraboloidal waves
  • Intensity = \(| U(r) |^2\)

Interference

By superposition of two rays

\[I = | U(r) |^2 = I_1 + I_2 + 2\sqrt{I_1I_2} cosΔϕ\]

Paraxial waves

  • Slowly varying envelope: slow change in amplitude
  • Paraxial Helmholtz equation
\[ ∇_T^2 A(r) = 2ik\frac{∂A}{∂z} \]

Gaussian beam

https://en.wikipedia.org/wiki/Gaussian_beam

\[ \begin{aligned} A(r) &= \frac{A_1}{q(z)}exp \left( \frac{-ik(x^2 + y^2)}{2q(z)} \right) \cr q(z) &= z + iz_0 \end{aligned} \]
  • q(z): q-parameter
  • A solution to the paraxial Helmholtz equation
  • The best we can do in real situations
  • Cannot avoid spreading, but the Gaussian beam's angular divergence is minimal.
  • Inside the waist (the narrowest part of the beam) is similar to planar wave
  • Long wavelength and thin beam waist -> more divergence
  • Depth of focus
\[ \begin{aligned} W(z) &= W_0 \sqrt{1 + (z / z_0)^2} \cr DOF &= 2z_0 = 2 \frac{W_0^2 \pi}{\lambda} \end{aligned} \]
  • Calculate the divergence by the q parameter and complex distance
\[ \begin{aligned} q_2 &= q_1 + d \cr \frac{1}{q_1} &= \frac{1}{R_1} - \frac{iλ}{πW_1^2} \cr \frac{1}{q_2} &= \frac{1}{R_2} - \frac{iλ}{πW_2^2} \cr \end{aligned} \]
  • Beam quality: M-squared factor \(M^2 \ge 1\); the smaller, the better.

  • Through thin lens

  • Change in phase -> wavefront is bent
  • Radius is unchanged
  • Not focused on a single point like in ray optics
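The beam-width and depth-of-focus formulas above can be evaluated directly; the 532 nm wavelength and 10 μm waist are illustrative choices:

```python
import math

def rayleigh_range(w0, lam):
    """Rayleigh range z0 = pi * w0^2 / lambda."""
    return math.pi * w0**2 / lam

def beam_radius(z, w0, lam):
    """Gaussian beam radius W(z) = w0 * sqrt(1 + (z/z0)^2)."""
    return w0 * math.sqrt(1.0 + (z / rayleigh_range(w0, lam))**2)

def depth_of_focus(w0, lam):
    """DOF = 2 * z0 = 2 * pi * w0^2 / lambda."""
    return 2.0 * rayleigh_range(w0, lam)

# Illustrative: 532 nm laser focused to a 10 um waist
w0, lam = 10e-6, 532e-9
print(depth_of_focus(w0, lam))                             # ~1.2 mm
print(beam_radius(rayleigh_range(w0, lam), w0, lam) / w0)  # sqrt(2) at z = z0
```

Note the trade-off the formulas encode: a tighter waist (smaller \(w_0\)) shortens the depth of focus quadratically and increases divergence.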

Higher order modes (TEM (l,m))

  • Laguerre-Gaussian beams -> important in superresolution.

Fourier Optics

  • Any wave = sum (superpositions) of plane waves
  • Important properties: angles and spatial frequencies
  • Optical components: linear functions with frequency response
  • Impulse (with all frequencies) => Impulse response function
  • Inputs of various freq. => Transfer function

Propagation of light in free space

Angles => spatial frequencies in the x-y plane

\[U(x,y,z) = A \cdot exp(-j(k_xx+k_yy+k_zz))\]

Where
* wave vector \(\textbf{k} = (k_x, k_y, k_z)\)
* wave length \(\lambda\)
* wave number \(k = \sqrt{k_x^2 + k_y^2 + k_z^2} = \frac{2\pi}{\lambda}\)

For paraxial waves

\[\theta_x = sin^{-1}(\lambda\nu_x) \approx \lambda\nu_x\]
\[\theta_y = sin^{-1}(\lambda\nu_y) \approx \lambda\nu_y\]

Optical Fourier Transform

  • Spatial frequencies at different angles
  • A lens could do Fourier transform at the focal plane

Fraunhofer Far Field Approximation

  • Far field: \(d \gg \frac{b^2}{\lambda} , \frac{a^2}{\lambda}\)
  • Near field (\(d \approx \lambda\)): superresolution (~nm) due to little distortion
  • Far field image (diffraction pattern) is the Fourier transform of the original image
  • The smaller the feature (the higher its spatial frequency), the wider it diffracts (broader halo)
  • Diffraction: is everywhere, but best demonstrated in the pinhole(aperture) experiment
Rectangular aperture
  • expressed as cardinal sine (sinc) function
  • Angular divergence (first zero value): \(\theta_x = \frac{\lambda}{D_x}\)
Circular aperture
  • Bessel function, Airy pattern
  • \(\theta = 1.22\frac{\lambda}{D}\): angle of the Airy disk
  • Focused beam through an aperture: spot radius \(d = 1.22\frac{f\lambda}{D}\)
4-F imaging system
  • Original image -> lens (FT) -> (spatial frequencies) -> lens(iFT) -> perfect image (in theory)
  • Filtering of higher spatial frequencies: less detailed image, less noise
  • Spatial filtering: cleaning laser beams
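The aperture formulas above in a short numeric sketch; 500 nm light and a 1 mm aperture are illustrative values:

```python
def airy_angle(lam, D):
    """Angular radius of the Airy disk (circular aperture): theta = 1.22 * lambda / D."""
    return 1.22 * lam / D

def rect_first_zero(lam, Dx):
    """First zero of the rectangular-aperture (sinc) pattern: theta = lambda / Dx."""
    return lam / Dx

# Illustrative: 500 nm light through a 1 mm circular aperture
theta = airy_angle(500e-9, 1e-3)
print(theta)  # ~6.1e-4 rad; on a screen 1 m away the Airy radius is ~0.61 mm
```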

Transfer function of free space

  • Higher freq. => real exponent => attenuate rapidly (evanescent wave)

Polarization

  • Electric-field as a vector
  • Polarization ellipse: looking at the xy plane from the z axis.
  • Phase difference: \(\varphi\)
  • Linearly polarized: \(\varphi = 0\) or \(\pi\)
  • Circularly polarized: \(\varphi = \pm \pi /2\) and \(a_x = a_y\)
  • Linear polarizer: passes only light of a certain linear polarization
  • Wave retarder: changes \(\varphi\) to change polarization pattern

Fiber optics

  • Low-loss
  • Light could bend inside it
  • Single-mode fiber (small core): Gaussian wave only
  • Multimode fiber (larger core): higher order light source
  • Relation to numerical aperture (NA)
  • Acceptance angle of the fiber: \(\theta_a = sin^{-1}(NA)\)
  • Larger NA: more higher order information, more noise
  • Smaller NA: \(V = 2\pi\frac{a}{\lambda_0}NA < 2.405\). Gaussian wave only
  • Polarization-maintaining fibers

Quantum optics

  • Quantum electrodynamics (QED)
  • Energy carried by a photon: \(E = h\nu = \hbar\omega\)
  • Typical light source: more than trillion photons per second
  • \(E (eV) = \frac{1.24}{\lambda_0(\mu m)}\)
  • Momentum carried by a photon: \(p = \hbar k\)
  • Probability of photon position or the squared magnitude of the SWE (individual behavior) is directly proportional to light intensity (group behavior)
  • At smaller n : the interference pattern looks random (randomness of photon flow)
  • At larger n: the interference pattern is more similar to what we see in the macroscale
  • Poisson distribution (discrete randomness with rate = photon flux)
  • mean = variance
  • SNR = mean^2 / variance = mean
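The photon-energy rule of thumb and the Poisson SNR statement above, as a short sketch (the SNR definition mean²/variance follows the notes):

```python
def photon_energy_ev(lambda_um):
    """Rule of thumb E (eV) ~ 1.24 / lambda (um), from E = h*c/lambda."""
    return 1.24 / lambda_um

def poisson_snr(mean_photons):
    """For Poisson statistics variance = mean, so SNR = mean^2/variance = mean."""
    return mean_photons**2 / mean_photons

print(photon_energy_ev(0.532))  # green photon: ~2.33 eV
print(poisson_snr(100.0))       # 100: more photons -> proportionally better SNR
```

This is the shot-noise floor that low-light microscopy (confocal, single-molecule imaging) constantly runs up against.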

Schrodinger wave equation (SWE)

  • Solving an eigenvalue problem => discrete solutions => quantized energy levels
  • Particle in a well / atoms with a single electron => standing wave (discrete solutions)
  • Multi-electron: no analytical solutions

Photons and matter

  • Photon absorption and release: jumping in energy levels
  • Rotational : microwave to far-infrared
  • Vibrational : IR e.g. CO2 laser
  • Electronic : visible to UV
  • Photon absorption: electron jump up in energy level
  • Photon emission: Spontaneous vs stimulated (laser)

Occupation of energy levels

  • Boltzmann distribution
  • Pumping energy: population inversion
  • Laser stimulated emission

Luminescence

  • Cathodo- (CRT)
  • Sono- (ultrasound)
  • Chemi- (lightsticks)
  • Bio- (firefly)
  • Electro- (LED)
  • Photo- (Laser, Fluorescence, Phosphorescence)

Photoluminescence

  • In fact emitting a range of wavelengths (many sub-energy levels)
  • Fluorescence (spin-allowed, shorter lifetime) vs phosphorescence (spin-forbidden, longer lifetime)

Multiphoton

  • Absorption of 2 lower energy photons => emission of 1 higher energy photon
  • Multiphoton fluorescence

Light scattering

  • Photoluminescence: real excited states (resonant)
  • Scattering: virtual excited states (non-resonant)
  • Rayleigh scattering: same energy (elastic)
    • Particle size much smaller than the photon wavelength
    • Reason behind blue sky
    • vs Mie scattering particle size comparable to photon wavelength
  • Raman scattering
    • Stokes: loses energy
    • Anti-Stokes: gains energy
    • Molecular signature
  • Brillouin: acoustic

Stimulated Raman scattering (SRS)

  • Label-free microscopy

Eyes

  • 380 nm ~ 710 nm
  • threshold of vision: 10 photons (a cluster of rod cells)
  • Logarithmic perception: Weber-Fechner Law (like hearing)
  • Single lens: spherical and chromatic aberration inevitable
  • Astigmatism: directional aberration
  • Pupil (Aperture)
  • Small pupil: less spherical and chromatic aberration (paraxial), less brightness and more diffraction
  • Large pupil: more brightness, more spherical and chromatic aberration
  • Optimum: 3mm
  • Viewing angle: the perceived size

Length scale of microscopes

  • Resolution limit of regular light microscope: 200nm
  • Clear organelles structure: 30nm

Geometrical optics of a thin lens

  • Lens equation: \(\frac{1}{f} = \frac{1}{a} + \frac{1}{b}\)
  • Magnification factor: \(M = \frac{b}{a}\)
  • Virtual image: divergent rays forming a real image on the retina due to the lens
  • Compound microscope: M = \(M_{obj}\) * \(M_{eye}\)

Infinity-corrected microscope

  • Object on the focal plane of the objective lens
  • Parallel rays from the objective is converged by the tube lens
  • Magnification: reference tube length (160-200mm) divided by the focal length of the objective
  • shorter focal length = larger magnification
  • 1.5mm => 100x

Microscope anatomy and design

  • The most important: resolving power (distinguish between two points) = numerical aperture (NA)
  • 2nd: Contrast : object v.s. background (noise) signal strength
  • 3rd: Magnification: \(M_{obj}\) * \(M_{eye}\)

Anatomy

  • Light source: Koehler illumination to see the sample, not the light source
  • Diaphragm
  • Field: field of view
  • Condenser / aperture: resolution + brightness (open, larger angle) vs contrast + depth of view (closed, smaller angle)
  • Condenser
  • Objective
  • Eyepiece / camera

Different types of microscopic design

  • Transmitted light
  • Bright field
  • Dark field
  • Phase contrast
  • DIC
  • Polarization
  • Reflected light: objective = condenser (most common in modern microscopes)
  • Fluorescence
  • Upright vs inverted

Optical aberrations

Spherical aberrations

  • Paraxial and peripheral rays have different focal planes
  • Asymmetry in unfocused images
  • Corrected by
  • 2 plano-convex lenses facing each other
  • meniscus lenses
  • lenses with different radii
  • doubling with another lens with opposing degree of spherical aberration

Chromatic aberrations

  • Different refractive index for different wavelengths
  • Corrected by
  • Doubling with a lens with a different material and shape
  • Achromat: corrected for 2 wavelengths
  • Apochromatic: corrected for at least 3 wavelengths
  • Fluotar / Neofluar (semi-apochromatic)

Astigmatism

  • Different directional plane, different foci
  • Not in perfect alignment (off-axis) / curvature of field
  • Esp. in high NA lens
  • Caused / corrected by a plano-cylindrical lens

Coma

  • Comet tail
  • Off-axis aberration (misalignment)

Field Curvature

  • Thin flat object -> image with edges curving towards lens
  • Cause: difference of lengths of light paths
  • Esp. in high NA
  • Planar view objectives correct this

Distortion

  • non-linear aberrations
  • different magnification across the field of view

Transverse chromatic aberration

  • Chromatic difference of magnification

Testing for aberrations

  • Color shift between channels
  • Fluorescent beads

Anti-vibration tables

  • Vibrations
  • Ground (low freq. 0.1 - 5 Hz)
  • Acoustic
  • Direct vibration from the components (10-100 Hz)
  • Solution:
  • Air isolators
  • Active control

Ergonomics

  • Protect scientists' eyes, neck, and shoulder

Objective

  • The most important part in a microscope

Objective class

  • More corrections, more expensive (relative cost)
  • Achromat: 1x
  • Semi-apochromat: 2-3x
  • Apochromat: 5-10x

Labels on the objective

  • numerical aperture (NA): resolving power (collected photons)
  • magnification (e.g. 10x): field of view
  • color correction: Achromat / Semi-apochromat (Neofluar / fluotar) / Apochromat
  • immersion: air / water / oil
  • free working distance
  • cover slip thickness (usually 170 μm)

Numerical aperture

\(NA = n \sin\alpha\)

Oil immersion
  • no air gap causing total internal reflection (loss of photon information)
  • NA up to 1.4
Abbe's law

Lateral spatial resolution (xy):

\[ d = \frac{\lambda}{2NA} \approx \frac{\lambda}{2} \]

Axial spatial resolution (z): usually worse (~700 nm)

Depth of field vs depth of focus

  • Depth of field: moving the object
  • Depth of focus: moving the image plane

Brightness

  • More NA, brighter
  • More mag, dimmer
  • Best brightness: NA 1.4 and mag 40x

Illumination (lamp)

  • Tungsten: 300-1500nm (reddish), dimmer
  • Tungsten-halogen lamp: stable spectrum and bright
  • Mercury lamp: 5 spectral peaks, 200hrs
  • Meta-halide lamp: same spectral properties as the mercury lamp, lasts 2000 hrs
  • Xenon lamp: more constant illumination across wavelengths, 1000 hrs
  • LED: small, stable, efficient, intense, multiple colors, quick to switch, long-lasting (10000 hrs)

Filter

  • Absorption vs interference (modern)
  • Neutral-density (equal) vs color filters (specific wavelengths)

Resolution

  • Rayleigh's criterion: \(d = \frac{0.61 \lambda}{NA}\)
  • Sparrow's (astrophysics): \(d = \frac{0.47 \lambda}{NA}\)
  • Abbe's: \(d = \frac{0.5 \lambda}{NA}\)
  • Interpreted as the spatial frequency response of a transfer function (low-pass filter)
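The three criteria side by side, for an illustrative fluorescence setup (510 nm emission, 1.4 NA oil objective):

```python
def rayleigh_d(lam, NA):
    """Rayleigh criterion: d = 0.61 * lambda / NA."""
    return 0.61 * lam / NA

def sparrow_d(lam, NA):
    """Sparrow criterion (astrophysics): d = 0.47 * lambda / NA."""
    return 0.47 * lam / NA

def abbe_d(lam, NA):
    """Abbe limit: d = 0.5 * lambda / NA."""
    return 0.50 * lam / NA

# Illustrative: GFP-like emission (~510 nm) with a 1.4 NA oil objective
lam, NA = 510e-9, 1.4
for name, f in [("Rayleigh", rayleigh_d), ("Abbe", abbe_d), ("Sparrow", sparrow_d)]:
    print(f"{name}: {f(lam, NA) * 1e9:.0f} nm")
```

All three land near the ~200 nm figure quoted earlier for visible-light microscopy; they differ only in how "resolved" is defined.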

Contrast

  • Signal strength of object vs background
  • Human eye limit: 2% (dynamic range = 50x, 5-6 bits)
  • Improved by staining (including fluorescence) and lighting techniques
Interactions with the specimen
  • Absorption / transmission / reflection: produce contrast (amplitude objects)
  • scattering (irregular) / diffraction : edge contrast enhancement
  • Refraction: difference in refractive index (n)
  • Polarization: DIC (differential interference contrast) with two coherent beam and Wollaston prisms
  • Phase change: phase contrast (shifting phases)/ phase interference
  • Fluorescence: achieves superresolution
  • Absorption and release of photons (time scale of 1fs to 1ns)
  • Great resolution, contrast, sensitivity and specificity
  • Live cell imaging
  • Various labels (with different wavelength)
Bright vs Dark field
  • Bright field : darker specimen than the background, lower contrast
  • Dark field (by oblique illumination): brighter specimen than the background, higher contrast
  • transmitted light fall outside the objective, scattered light only

Fluorescence microscopy

  • Finally the main point of superresolution microscopy
  • high-contrast (clean labeling)
  • sensitive: single molecule imaging (single photon)
  • specific: labeling agent dependent
  • multiple labeling at once with different wavelength
  • versatile
  • Live imaging: cell metabolism, protein kinetics
  • Molecular interaction: FRET
  • Relatively cheap and safe

Quantum processes

  • Driving photon: kick electrons to an upper electronic state
  • Fluorescence: electrons falling back to the ground state
  • Some relaxation via vibrational energy levels (Stokes shift) or other non-radiative energy losses
  • Absorb / emit a range of wavelengths with abs. peak
  • emitted wavelength is usually longer than absorbed
  • Time scale: 1fs to 1ns
  • Phosphorescence: singlet -> triplet -> singlet electron (spin-forbidden), much longer time scale (in seconds)

Fluorophore

  • Conjugated pi bonds providing the electronic energy levels from UV to IR
  • Fluorescence lifetime: depends on the type of fluorophore, e.g. FLIM
  • Photobleaching: irreversibly destroyed after 10000 - 100000 absorption/emission cycles
  • FRAP: measuring diffusion rate
  • Quenching / blinking
  • Reversible suppression of emission
  • PALM / STORM (single molecule microscopy)
  • Emission tail: increased crosstalk to others
  • Efficiency (Brightness): \(\Phi\epsilon_{max}\)
  • Quantum yield (Φ)
  • Molar extinction coef. (\(\epsilon_{max}\))
  • The best one: quantum dots (also the most versatile)

Fluorescence microscope

  • epi illumination is more suitable for biology
  • Objective = condenser
  • Increased contrast (reduced background)
  • transmitted light falls outside the field of view (only fluorescence photons are seen)
  • Filter sets: one for excitation + one for emission + one dichroic mirror
  • May need to design excitation / emission bands for multiple fluorophores

Fluorophores

  • Smaller = better spatial resolution
  • May disrupt normal cellular function
  • Labels: organic dye (1 nm), protein (3 nm), quantum dots (10 nm), gold particles (100 nm)
  • Specificity molecules: Antibody (15 nm), Fab, Streptavidin, Nanobody (3 nm)
  • May have secondary ones (making the entire dot even bigger)
  • Absorption / emission wavelengths
  • Stokes shift
  • Molar extinction coefficient / quantum yield = brightness
  • Toxicity
  • Saturation
  • Environment (pH)

Fluorescent protein: e.g. GFP

  • Introduced by transfection: not always successful (transfection and cell viability)
  • Others: CFP (cyan), mCherry, mOrange, ...
Photoactive fluorescent protein e.g. mCherry
  • State transitions by activating photons
  • photoactivatable
  • photoconvertible
  • photoswitchable

Quantum dots

  • Bright and resistant to photobleaching
  • Blinking under continuous activation
  • Bigger (10 nm)
  • Broad excitation and narrow emission spectra

Autofluorescence

  • e.g. Tryptophan, NAD(P)(H) in the cell
  • Label-free imaging
  • Background

Issues of fluorescence microscopy

  • Blurring
  • Bleaching
  • Bleed-through

Blurring in fluorescence microscopy

  • Limited depth of field compared to specimen thickness
  • Reduce the SNR (out-of-focus blurred images)
  • Solution: optical sectioning

Confocal

  • Pinhole: block out-of-focus light. Aperture in Airy Units (AU), optimal is 1
  • Raster scanning with mirrors and a laser: point-by-point
  • Phototoxicity issues: Time-lapse possible, but even higher phototoxicity
  • photon detection
  • PMT: high gain, low quantum efficiency(QE) (1/8)
  • CCD: higher QE (65%), higher background noise (lower SNR)
  • sCMOS: QE~95%
  • Avalanche photodiode (APD): QE~80%, higher SNR
  • Imaging parameters: no absolute rules, always trade-offs
  • Resolution: slightly better than wide field (1.4x spatial freq., by FWHM of the PSF)
Spinning disc
  • Faster imaging (parallel scans) and lower phototoxicity
  • Spinning microlens array + pinholes
  • Thinner optical slice of 800nm (traditional confocal: 1000nm)

Point spread function

  • Point -> psf -> Airy disk
  • After Fourier transform: Optical transfer function (OTF)

Convolution

  • Lens: finite aperture, could not capture higher spatial frequencies of the object
  • A way to understand and calculate blurring. Image = object * psf
  • Simplified to multiplication in the frequency domain by Fourier transform
  • Optical transfer function (OTF) = F{PSF}

Point spread function (PSF)

  • Hour-glass shape (sharper xy and less z resolution) due to the orientation of the objective
  • Confocal pinhole open at 1 AU: less spreading of the PSF

Deconvolution

  • Computational iterative process: deblurring, restorative
  • Only makes good image better

Total Internal Reflection Fluorescence (TIRF)

  • An illumination method for bottom 200nm (extent of evanescent field)
  • Improves axial resolution (up to ~100 nm) and contrast

Colocalization

  • spatial overlap between two (or more) different fluorescent labels
  • Pearson correlation coefficient
  • Spatial colocalization does not mean interaction (just the same pixel: co-occurrence)
  • Software analysis: ImageJ
  • Mander's Colocalization coefficients
  • Noise leads to underestimation of colocalization

Spectral Overlap

  • Bleed-through
  • Crossover
  • Cross-talk
  • Managed by tweaking light sources and filters

Resolution limit

  • Bound by physical laws
  • Abbe limit: 0.5 * wavelength / numerical aperture, from Fourier optics
  • Electron microscope (EM): 2nm. But cells need to be fixed and processed
  • Fluorescent microscopy: 200 nm. Multiple labeling methods. Multiple strategies to enhance the resolution.

Super-resolution light microscopy (SRLM) (precisely nanoscopy)

  • Cost, specimen prep, and operational complexity are in the middle between confocal and EM.

Near field microscopy

  • Evanescent waves (before the light diffracts)
  • 5-10 nm axial resolution, 30-100 nm lateral resolution
  • Practically zero working distance

4-pi microscopy

  • Two opposing objectives improves z resolution
  • Technical difficulties

PALM, STED, STORM

  • Using non-linear properties of the fluorophores (turning them on / off)

Stimulated emission depletion microscopy (STED)

  • Donut-shaped induced depletion laser (high power)
  • At the tail of emission spectrum to avoid cross-talk
  • Donut-shape via a vortex phase plate
  • Diffraction-limited. But combining another diffraction-limited excitation laser to achieve super-resolution
  • Higher requirements on labels and sample preparation, plus optical alignment (vibration sensitive)
  • Depletion efficiency: \(p_{STED} = exp(-\frac{I_{STED}}{I_{sat}})\)
  • Resolution by the factor of \(\sqrt{1 + \frac{I_{STED}}{I_{sat}}}\)
  • More \(I_{STED}\), more resolution, but more power (photobleaching)
  • Implementation: Pulsed, continuous wave, gated
  • Pulsed: synchronization challenges
  • continuous wave (CW): high background noises
  • Gated: lower background noises than CW, easier than pulsed, mainstream
  • Protected STED: less photobleaching using photoswitchable dyes
  • Long-time observation
  • STED with 4-pi: improved axial(z) resolution by another phase plate
Fluorescence probes
  • More restricted
  • Two color: long Stokes shift + normal Stokes shift dyes
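The resolution-scaling factor above can be evaluated directly, taking the diffraction-limited Abbe spot \(d_0 = \lambda / 2NA\) as the starting point (the 640 nm / 1.4 NA values are illustrative):

```python
import math

def sted_resolution(lam, NA, i_ratio):
    """Diffraction-limited spot shrunk by the STED factor sqrt(1 + I_STED/I_sat)."""
    d0 = lam / (2.0 * NA)  # Abbe limit without depletion
    return d0 / math.sqrt(1.0 + i_ratio)

lam, NA = 640e-9, 1.4
print(sted_resolution(lam, NA, 0.0))    # no depletion: ~229 nm (plain Abbe limit)
print(sted_resolution(lam, NA, 100.0))  # I_STED = 100 * I_sat: ~23 nm
```

The square root is the catch: each further 2x gain in resolution costs 4x the depletion power, which is where photobleaching enters.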

Localization microscopy

  • Tracking particles' central positions by inverting the point spread function (e.g. fitting a Gaussian distribution). Only possible with sparse points, thus stochastic.
  • Reconstruct the whole image from a series of sparse excited dyes.
  • Switching-based separation is the mainstream of sparse activation

Photoactivated localization microscopy (PALM)

  • Less convenient than dSTORM.

Stochastic optical reconstruction microscopy (STORM)

  • Direct STORM (dSTORM) is the current mainstream variant
  • Readily implemented on regular wide-field microscopes.
  • Requires selected dyes (esp. Alexa 647) and imaging buffers.
  • Cameras instead of PMTs, to capture the whole field.
  • Fit a Gaussian distribution to each spot's intensity profile to calculate its centroid.
  • Labels can affect the measured length (e.g. primary and secondary antibodies)
  • Localization precision: more photons means less uncertainty (down to 5-20 nm precision), but more frames (time) are required
  • Precision estimation is a statistical issue.
  • \(FWHM \approx 2.35\,\sigma_{loc}\)
  • Imaging buffer: together with the activation laser, determines the state (active vs. dark) of the dyes
  • More fluorophores can be reactivated by the activation laser (typically UV) when the signal gets too weak,
    but the laser must not be so strong that it ruins the single-molecule signals.
  • To avoid cross-talk (activating multiple types of dyes at once) and photobleaching from the stronger activation photons,
    start activation with the far-red (long-wavelength) dyes
  • Irradiation density
  • Too high: no single molecules anymore, poor localization quality
  • Too low: more acquisition time required and more background noise
  • Threshold for signal detection and rejection criteria
  • Too strict: wastes real signal
  • Too loose: admits more noise
  • Too many photons at one spot indicate multiple molecules = false positives, poorly localized
  • Structural averaging: reducing noise by averaging a series of images (time info. -> spatial info.)
  • Pair correlation analysis and molecular cluster analysis (not randomly distributed particles)
  • Single molecule tracking
  • 3D localization by encoding z information into the optic system
  • Bi-plane
  • Double helix
  • Astigmatism
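
The photon-count scaling described above can be sketched with the simple first-order estimate \(\sigma_{loc} \approx \sigma_{PSF}/\sqrt{N}\) (this ignores pixelation and background terms; the PSF width below is an assumed value for illustration):

```python
import math

def localization_precision(psf_sigma_nm: float, n_photons: int) -> float:
    """First-order estimate: sigma_loc ~ psf_sigma / sqrt(N).
    Ignores pixel size and background contributions."""
    return psf_sigma_nm / math.sqrt(n_photons)

def fwhm(sigma: float) -> float:
    """Gaussian FWHM = 2*sqrt(2*ln 2)*sigma, i.e. ~2.35*sigma."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# Assumed PSF sigma of ~106 nm (a ~250 nm FWHM diffraction-limited spot)
for n in (100, 1000, 10000):
    s = localization_precision(106, n)
    print(f"N = {n:5d} photons: sigma_loc = {s:5.1f} nm (FWHM {fwhm(s):5.1f} nm)")
```

Collecting 10x more photons improves precision only by \(\sqrt{10}\), which is why reaching the 5-20 nm regime costs many frames.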

Structured illumination microscopy

  • SIM for short
  • Grating pattern for structured illumination (stripes) encodes high-frequency information
  • Explained by Fourier optics (extension of the optical transfer function (OTF) support)
  • Multiple images acquired with illumination stripes at different angles
  • Increases resolving power by 2x
  • Even more resolution improvement via non-linear optics (saturated SIM)
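
The 2x figure follows from frequency mixing: the illumination pattern shifts sample frequencies by its own spatial frequency, so the observable support extends to the sum of the detection and illumination cutoffs. A minimal sketch with normalized (assumed) frequencies:

```python
def sim_cutoff(k_detection: float, k_illumination: float) -> float:
    """Extended frequency support with structured illumination: the pattern
    shifts sample frequencies by k_illumination into the OTF passband."""
    return k_detection + k_illumination

# The illumination pattern is itself diffraction-limited, so at best
# k_illumination == k_detection and the support (resolving power) doubles
k_det = 1.0  # normalized detection cutoff
improvement = sim_cutoff(k_det, k_det) / k_det
print(f"resolution improvement: {improvement:.1f}x")
```

Saturated SIM escapes this 2x bound because the non-linear fluorophore response adds harmonics of the pattern frequency.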

Light sheet microscopy

  • Illumination orthogonal to the detection axis
  • Improved z resolution and optical sectioning
  • Low laser intensity for live-cell imaging, minimal phototoxicity
  • Scanning beam / lattice for even illumination and better z resolution

Change user directory

Use sudo and at to schedule the usermod command, which changes the user's home directory. (usermod requires the target user to have no running processes, hence the scheduling and the pkill below.)

sudo at "now +5 minutes"  # Run the following commands in 5 minutes

In the at interface

pkill -u LOGIN                  # kill the user's processes first
usermod -m -d /new/home LOGIN   # Change user home dir (-d) and move (-m) the content into the new folder

Press Ctrl+D to exit the at interface. Log out, wait 10 minutes, and log back in.

See also

Make IJulia use Python package provided by CondaPkg.jl

Set the JUPYTER environment variable to the CondaPkg.jl-provided jupyter, and IJulia.jl will use it to start the kernel. 1

using CondaPkg
CondaPkg.add("jupyter")
ENV["JUPYTER"] = CondaPkg.which("jupyter")

using Pkg
Pkg.add("IJulia") # if IJulia was not yet installed
Pkg.build("IJulia") # To change the configuration of existing installation

heredoc: Passing multiple lines of string

Use a heredoc to pass a string as-is between two delimiters (e.g. EOF):

cat << "EOF" >> ~/.xprofile
# ~/.xprofile
export GTK_IM_MODULE=ibus
export QT_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
ibus-daemon -drx
EOF

This appends the following lines to ~/.xprofile:

.xprofile
# ~/.xprofile
export GTK_IM_MODULE=ibus
export QT_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
ibus-daemon -drx