```mermaid
flowchart LR
    E[Empirical Data] --> L[Loss Function]
    S[Simulation] --> O[Observation Model]
    O --> L
    L --> G[Gradients]
    G --> S
```
# Model Fitting
Fit simulation parameters to empirical data using differentiable simulation and gradient-based optimization.
## Workflow
- Simulate with current parameters
- Observe — compute BOLD, FC, or other derived signals
- Compare — evaluate loss against empirical target
- Update — use JAX autodiff gradients to improve parameters
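The four steps above can be sketched as a plain JAX gradient-descent loop. Everything here (the toy `simulate` and `observe` functions, the `rate` parameter, the target value) is a hypothetical stand-in for illustration, not the library's actual API:

```python
import jax
import jax.numpy as jnp

# Toy stand-ins for the real pipeline (hypothetical, for illustration only).
def simulate(params):
    """Simulate with current parameters: a toy exponential decay."""
    t = jnp.linspace(0.0, 1.0, 100)
    return jnp.exp(-params["rate"] * t)

def observe(x):
    """Observe: reduce the raw trajectory to a derived scalar signal."""
    return x.mean()

def loss_fn(params, target):
    """Compare: squared error against the empirical target."""
    return (observe(simulate(params)) - target) ** 2

# Update: plain gradient descent driven by JAX autodiff gradients.
target = 0.7
params = {"rate": jnp.array(2.0)}
grad_fn = jax.jit(jax.grad(loss_fn))
for _ in range(100):
    grads = grad_fn(params, target)
    # Step size chosen for this toy problem only.
    params = jax.tree_util.tree_map(lambda p, g: p - 2.0 * g, params, grads)
```

In the real workflow the loss compares simulated and empirical FC rather than a scalar, but the loop structure is the same.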
## Loss Function
A loss function defines the optimization objective:
```yaml
optimization:
  loss:
    name: fc_loss
    equation:
      rhs: "1 - correlation(simulated_fc, empirical_fc)"
    aggregate:
      over: regions
      reduction: mean
  free_parameters:
    - name: G
      domain: {lo: 0.1, hi: 5.0}
    - name: speed
      domain: {lo: 1.0, hi: 30.0}
  n_epochs: 200
  learning_rate: 0.01
```

## End-to-End Differentiability
Because `Network`, `TimeSeries`, and simulation kernels are JAX pytree-registered, gradients flow through the entire pipeline:
\[\nabla_\theta \mathcal{L} = \nabla_\theta \left[ 1 - \text{corr}\left( \text{FC}(\text{BOLD}(\text{simulate}(\theta))), \text{FC}_{\text{emp}} \right) \right]\]
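A minimal self-contained sketch of this gradient, using a toy coupled system in place of the real model (the BOLD step is omitted, and every name here — `W`, `simulate`, the ground-truth `G` — is a hypothetical stand-in):

```python
import jax
import jax.numpy as jnp

n = 4                                                        # toy region count
W = jax.random.uniform(jax.random.PRNGKey(0), (n, n)) * 0.3  # toy connectome

def simulate(theta, n_steps=200):
    """Toy network dynamics; theta["G"] scales the coupling."""
    def step(x, t):
        drive = jnp.sin(0.3 * t + jnp.arange(n))  # heterogeneous input
        x = x + 0.1 * (-x + theta["G"] * W @ jnp.tanh(x) + drive)
        return x, x
    _, traj = jax.lax.scan(step, jnp.zeros(n), jnp.arange(n_steps))
    return traj  # (n_steps, n)

def fc(traj):
    """Functional connectivity: region-by-region correlation matrix."""
    return jnp.corrcoef(traj.T)

def loss(theta, fc_emp):
    """1 - correlation between upper triangles of model and empirical FC."""
    i, j = jnp.triu_indices(n, k=1)
    m = fc(simulate(theta))
    c = jnp.corrcoef(jnp.stack([m[i, j], fc_emp[i, j]]))[0, 1]
    return 1.0 - c

# "Empirical" target generated from a ground-truth G (for this sketch only).
fc_emp = fc(simulate({"G": jnp.array(1.2)}))

# The gradient flows end to end: simulate -> FC -> loss.
grads = jax.grad(loss)({"G": jnp.array(0.4)}, fc_emp)
```

Because every stage is written in JAX, `jax.grad` differentiates through the time-stepping loop (`lax.scan`), the correlation matrix, and the loss in one call.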
## YAML Specification
A complete fitting experiment:
```yaml
dynamics:
  name: ReducedWongWang
  # ... model definition
network:
  label: Schaefer100
  # ... connectome
observations:
  - name: bold
    state_variable: S
    pipeline:
      - function: bold_balloon_windkessel
        parameters: {TR: {value: 2000}}
  - name: fc
    pipeline:
      - function: numpy.corrcoef
    source_observations: [bold]
optimization:
  loss:
    equation: {rhs: "1 - correlation(fc, empirical_fc)"}
  free_parameters:
    - name: G
      domain: {lo: 0.1, hi: 5.0}
  n_epochs: 200
  learning_rate: 0.01
```

## Results
```python
result = exp.run("tvboptim")

# Access optimization trajectory
result.optimization.loss_history
result.optimization.fitted_params

# Compare fitted vs empirical FC
result.integration.observations.fc
```

## See Also
- Observation Models — define BOLD, FC pipelines
- Algorithms — FIC/EIB tuning before optimization
- Parameter Exploration — find good starting points
- RWW tvboptim workflow — complete FC-matching example