VAE (Variational Autoencoder)
bnode_core.nn.vae.vae_architecture
Variational Autoencoder (VAE) architecture for timeseries reconstruction.
This module implements a Variational Autoencoder with parameter conditioning for timeseries data (states and outputs). The architecture supports multiple modes:
- Standard VAE: Encoder-Decoder with latent space
- PELS-VAE: Parameter-conditioned VAE with Regressor for mu/logvar prediction
- Feed-forward NN: Direct mapping from parameters to timeseries (bypasses latent space)
The model can reconstruct timeseries from either the encoder (during training) or from the regressor (during testing/prediction), enabling parameter-conditioned generation.
It is intended for tasks that map physical parameters to a complete timeseries, e.g. the transient response of an RLC circuit with a fixed initial condition under different parameter values R, L, C.
Attention
This documentation is AI generated. Be aware of possible inaccuracies.
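As a quick orientation, a PELS-VAE for the RLC example above might be set up as follows (all dimensions here are hypothetical, and the normalization layers still have to be initialized on training data, as described in the training module below):

```python
import torch
from bnode_core.nn.vae.vae_architecture import VAE

# Hypothetical dimensions: 2 states, 1 output, 3 physical parameters
# (R, L, C), 100 time steps per trajectory.
model = VAE(
    n_states=2,
    n_outputs=1,
    seq_len=100,
    parameter_dim=3,
    hidden_dim=128,
    bottleneck_dim=8,
    params_to_decoder=True,  # PELS-VAE mode
)
```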
Key components:
- Encoder: Maps timeseries (states + outputs) to latent distribution (mu, logvar)
- Decoder: Maps latent samples (and optionally parameters) to reconstructed timeseries
- Regressor: Maps parameters to latent distribution for parameter-conditioned generation
- Normalization: Time-series and parameter normalization layers
Loss function:
loss = mse_loss + beta * kl_loss + regressor_loss
or with capacity scheduling:
loss = mse_loss + gamma * |kl_loss - capacity| + regressor_loss
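Here kl_loss is the usual closed-form KL divergence between the encoder posterior N(mu, exp(logvar)) and a unit Gaussian prior; a minimal sketch (the module's exact reduction over batch and latent dimensions may differ):

```python
import torch

def kl_divergence(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent
    # dimensions and averaged over the batch.
    return torch.mean(-0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
```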
Encoder
Bases: Module
Encoder network mapping timeseries to latent distribution parameters.
Maps concatenated states and outputs to mean (mu) and log-variance (logvar) of a multivariate Gaussian distribution in latent space. Uses a multi-layer perceptron (MLP) with configurable depth and hidden dimensions.
Architecture:
Flatten -> Linear(n_channels*seq_len, hidden_dim) -> Activation
-> [Linear(hidden_dim, hidden_dim) -> Activation] x (n_layers-2)
-> Linear(hidden_dim, 2*bottleneck_dim) -> Reshape to [mu, logvar]
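A minimal sketch of how such an encoder can be assembled in PyTorch (illustrative, mirroring the documented layout rather than the module's exact code):

```python
import torch
import torch.nn as nn

def build_encoder(n_channels, seq_len, hidden_dim, bottleneck_dim,
                  activation=nn.ReLU, n_layers=3):
    layers = [nn.Flatten(), nn.Linear(n_channels * seq_len, hidden_dim), activation()]
    for _ in range(n_layers - 2):
        layers += [nn.Linear(hidden_dim, hidden_dim), activation()]
    layers.append(nn.Linear(hidden_dim, 2 * bottleneck_dim))  # mu and logvar stacked
    return nn.Sequential(*layers)

enc = build_encoder(n_channels=3, seq_len=100, hidden_dim=128, bottleneck_dim=8)
out = enc(torch.rand(16, 3, 100))               # (16, 16)
mu, logvar = out.view(16, 2, 8).unbind(dim=1)   # two (16, 8) tensors
```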
Attributes:

| Name | Description |
|---|---|
| `bottleneck_dim` | Dimensionality of the latent space. |
| `flatten` | Flattens input timeseries to 1D. |
| `linear` | Sequential MLP mapping flattened input to 2*bottleneck_dim outputs. |
Source code in src/bnode_core/nn/vae/vae_architecture.py
__init__(n_channels: int, seq_len: int, hidden_dim: int, bottleneck_dim: int, activation: nn.Module = nn.ReLU, n_layers: int = 3)
Initialize the Encoder network.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `n_channels` | `int` | Number of input channels (states + outputs concatenated). | required |
| `seq_len` | `int` | Length of the timeseries sequence. | required |
| `hidden_dim` | `int` | Number of hidden units in intermediate layers. | required |
| `bottleneck_dim` | `int` | Dimensionality of latent space (output is 2*bottleneck_dim for mu and logvar). | required |
| `activation` | `Module` | Activation function class (default: nn.ReLU). | `ReLU` |
| `n_layers` | `int` | Total number of linear layers (minimum 2, includes input and output layers). | `3` |
Source code in src/bnode_core/nn/vae/vae_architecture.py
forward(x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]
Encode timeseries to latent distribution parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input timeseries tensor of shape (batch, n_channels, seq_len). | required |

Returns:

| Type | Description |
|---|---|
| `Tuple[Tensor, Tensor]` | Tuple of (mu, logvar): mean and log-variance of the latent distribution, each of shape (batch, bottleneck_dim). |
Source code in src/bnode_core/nn/vae/vae_architecture.py
Decoder
Bases: Module
Decoder network for VAE, generating timeseries from latent vectors.
The decoder maps latent vectors (and optionally system parameters) back to timeseries data. It supports three modes:
- Standard VAE: z_latent → timeseries
- PELS-VAE: (z_latent, parameters) → timeseries (params_to_decoder=True)
- Feed-forward: parameters → timeseries (bottleneck_dim=0, params_to_decoder=True)
Architecture: Linear (latent+params → hidden) → MLP → Linear (hidden → n_channels*seq_len) → Reshape
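The decoder's input width follows directly from the mode; schematically (input assembly only, with illustrative shapes):

```python
import torch

bottleneck_dim, param_dim = 8, 3
z = torch.rand(16, bottleneck_dim)   # latent sample
p = torch.rand(16, param_dim)        # normalized system parameters

x_standard = z                        # standard VAE: width = bottleneck_dim
x_pels = torch.cat([z, p], dim=1)     # PELS-VAE: width = bottleneck_dim + param_dim
x_feedforward = p                     # feed-forward: width = param_dim (bottleneck_dim = 0)
```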
Attributes:

| Name | Description |
|---|---|
| `channels` | Number of output channels in reconstructed timeseries. |
| `seq_len` | Length of output timeseries sequence. |
| `params_to_decoder` | If True, concatenate normalized parameters to latent vector as decoder input. |
| `param_normalization` | Normalization layer for parameters (if params_to_decoder=True). |
| `feed_forward_nn` | If True, decoder operates in feed-forward mode (no latent vector). |
| `linear` | Sequential MLP mapping latent (+ params) to flattened timeseries. |
Source code in src/bnode_core/nn/vae/vae_architecture.py
__init__(n_channels: int, seq_len: int, hidden_dim: int, bottleneck_dim: int, activation: nn.Module = nn.ReLU, n_layers: int = 3, params_to_decoder: bool = False, param_dim: Optional[int] = None)
Initialize the Decoder network.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `n_channels` | `int` | Number of output channels in reconstructed timeseries. | required |
| `seq_len` | `int` | Length of output timeseries sequence. | required |
| `hidden_dim` | `int` | Number of hidden units in intermediate layers. | required |
| `bottleneck_dim` | `int` | Dimensionality of latent space input (0 for feed-forward mode). | required |
| `activation` | `Module` | Activation function class (default: nn.ReLU). | `ReLU` |
| `n_layers` | `int` | Total number of linear layers (minimum 2). | `3` |
| `params_to_decoder` | `bool` | If True, concatenate system parameters to latent input (PELS-VAE mode). | `False` |
| `param_dim` | `Optional[int]` | Dimensionality of parameter vector (required if params_to_decoder=True). | `None` |
Source code in src/bnode_core/nn/vae/vae_architecture.py
forward(z_latent: torch.Tensor, param: Optional[torch.Tensor] = None) -> torch.Tensor
Decode latent vector (and optionally parameters) to timeseries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `z_latent` | `Tensor` | Latent vector of shape (batch, bottleneck_dim). | required |
| `param` | `Optional[Tensor]` | System parameters of shape (batch, param_dim) (required if params_to_decoder=True). | `None` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Reconstructed timeseries tensor of shape (batch, n_channels, seq_len). |
Source code in src/bnode_core/nn/vae/vae_architecture.py
Regressor
Bases: Module
Regressor network mapping system parameters to latent distribution.
Used in PELS-VAE mode to predict latent distribution parameters (mu, logvar) directly from system parameters, without requiring timeseries input. This allows the VAE to learn relationships between system parameters and latent representations.
Architecture:
Normalize params → Linear (params → hidden) → MLP → Linear (hidden → 2*bottleneck_dim) → Reshape to (mu, logvar)
Attributes:

| Name | Description |
|---|---|
| `bottleneck_dim` | Dimensionality of the latent space. |
| `normalization` | Normalization layer for input parameters. |
| `linear` | Sequential MLP mapping parameters to 2*bottleneck_dim outputs. |
Source code in src/bnode_core/nn/vae/vae_architecture.py
__init__(parameter_dim: int, hidden_dim: int, bottleneck_dim: int, activation: nn.Module = nn.ReLU, n_layers: int = 3)
Initialize the Regressor network.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `parameter_dim` | `int` | Dimensionality of input parameter vector. | required |
| `hidden_dim` | `int` | Number of hidden units in intermediate layers. | required |
| `bottleneck_dim` | `int` | Dimensionality of latent space (output is 2*bottleneck_dim for mu and logvar). | required |
| `activation` | `Module` | Activation function class (default: nn.ReLU). | `ReLU` |
| `n_layers` | `int` | Total number of linear layers (minimum 2). | `3` |
Source code in src/bnode_core/nn/vae/vae_architecture.py
forward(param: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]
Predict latent distribution parameters from system parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `param` | `Tensor` | System parameters of shape (batch, parameter_dim). | required |

Returns:

| Type | Description |
|---|---|
| `Tuple[Tensor, Tensor]` | Tuple of (mu, logvar): predicted mean and log-variance of the latent distribution, each of shape (batch, bottleneck_dim). |
Source code in src/bnode_core/nn/vae/vae_architecture.py
VAE
Bases: Module
Variational Autoencoder for timeseries modeling with parameter conditioning.
This class implements three operational modes:
- Standard VAE: Encodes timeseries to latent space, decodes back to timeseries. Uses both Encoder and Regressor to predict latent distributions.
- PELS-VAE (params_to_decoder=True): Decoder receives both latent vector and system parameters, allowing parameter-conditioned reconstruction.
- Feed-forward NN (feed_forward_nn=True): Bypasses latent space entirely, directly mapping parameters to timeseries outputs.
The model jointly trains:
- Encoder: timeseries → (mu_encoder, logvar_encoder)
- Regressor: parameters → (mu_regressor, logvar_regressor)
- Decoder: latent vector (+ params) → timeseries
During training, reconstruction uses Encoder's latent distribution. During prediction, reconstruction uses Regressor's latent distribution.
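Schematically (a sketch of this routing, not the module's exact code):

```python
def reconstruct(encoder, regressor, decoder, reparametrize,
                x, params, train: bool):
    # Training uses the Encoder's posterior; testing/prediction uses the
    # Regressor's parameter-conditioned prediction instead.
    mu, logvar = encoder(x) if train else regressor(params)
    z = reparametrize(mu, logvar)
    return decoder(z)  # in PELS-VAE mode the decoder also receives params
```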
Attributes:

| Name | Description |
|---|---|
| `n_channels` | Total number of channels (n_states + n_outputs). |
| `n_states` | Number of state channels. |
| `n_outputs` | Number of output channels. |
| `timeseries_normalization` | Normalization layer for timeseries data. |
| `feed_forward_nn` | If True, operates in feed-forward mode (no latent space). |
| `Regressor` | Parameter-to-latent network (if not feed_forward_nn). |
| `Encoder` | Timeseries-to-latent network (if not feed_forward_nn). |
| `Decoder` | Latent-to-timeseries network. |
Source code in src/bnode_core/nn/vae/vae_architecture.py
__init__(n_states: int, n_outputs: int, seq_len: int, parameter_dim: int, hidden_dim: int, bottleneck_dim: int, activation: nn.Module = nn.ReLU, n_layers: int = 3, params_to_decoder: bool = False, feed_forward_nn: bool = False)
Initialize the VAE model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `n_states` | `int` | Number of state channels in timeseries. | required |
| `n_outputs` | `int` | Number of output channels in timeseries. | required |
| `seq_len` | `int` | Length of timeseries sequence. | required |
| `parameter_dim` | `int` | Dimensionality of system parameters. | required |
| `hidden_dim` | `int` | Number of hidden units in all sub-networks. | required |
| `bottleneck_dim` | `int` | Dimensionality of latent space. | required |
| `activation` | `Module` | Activation function class (default: nn.ReLU). | `ReLU` |
| `n_layers` | `int` | Number of layers in all sub-networks (minimum 2). | `3` |
| `params_to_decoder` | `bool` | If True, decoder receives parameters as additional input (PELS-VAE mode). | `False` |
| `feed_forward_nn` | `bool` | If True, operate in feed-forward mode without latent space. | `False` |
Source code in src/bnode_core/nn/vae/vae_architecture.py
reparametrize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor
Apply reparametrization trick to sample from latent distribution.
Samples z ~ N(mu, exp(0.5 * logvar)) using z = mu + eps * std, where eps ~ N(0, I). This allows backpropagation through the sampling operation.
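In code, the documented sampling step amounts to:

```python
import torch

def reparametrize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)  # eps ~ N(0, I)
    return mu + eps * std
```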
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `mu` | `Tensor` | Mean of latent distribution, shape (batch, bottleneck_dim). | required |
| `logvar` | `Tensor` | Log-variance of latent distribution, shape (batch, bottleneck_dim). | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Sampled latent vector z of shape (batch, bottleneck_dim). |
Source code in src/bnode_core/nn/vae/vae_architecture.py
forward(states: torch.Tensor, outputs: torch.Tensor, params: torch.Tensor, train: bool = True, predict: bool = False, n_passes: int = 1, test_with_zero_eps: bool = False, device: Optional[torch.device] = None) -> Tuple
Perform forward pass through the VAE network.
Three operational modes based on flags:
- Training (train=True, predict=False): Encode timeseries, reconstruct using Encoder's latent distribution
- Testing (train=False, predict=False): Encode timeseries, reconstruct using Regressor's latent distribution
- Prediction (predict=True, train=False): Skip Encoder, reconstruct using Regressor's latent distribution only
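For example, assuming a model built as in the introduction's sketch and tensors with illustrative shapes (the return tuple is unpacked per the Returns table below):

```python
import torch

states = torch.rand(16, 2, 100)   # (batch, n_states, seq_len)
outputs = torch.rand(16, 1, 100)  # (batch, n_outputs, seq_len)
params = torch.rand(16, 3)        # (batch, parameter_dim)

# Training: reconstruct via the Encoder's latent distribution.
retvals = model(states, outputs, params, train=True)

# Testing: reconstruct via the Regressor's latent distribution, mu only.
retvals = model(states, outputs, params, train=False, test_with_zero_eps=True)

# Prediction: skip the Encoder; average 10 stochastic decoder passes.
retvals = model(states, outputs, params, train=False, predict=True, n_passes=10)
```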
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `states` | `Tensor` | State timeseries of shape (batch, n_states, seq_len). | required |
| `outputs` | `Tensor` | Output timeseries of shape (batch, n_outputs, seq_len). | required |
| `params` | `Tensor` | System parameters of shape (batch, parameter_dim). | required |
| `train` | `bool` | If True, use Encoder's latent distribution for reconstruction. | `True` |
| `predict` | `bool` | If True, bypass Encoder and reconstruct from parameters only. | `False` |
| `n_passes` | `int` | Number of decoder passes to average (for stochastic predictions). | `1` |
| `test_with_zero_eps` | `bool` | If True during testing, use mu directly (zero variance sampling). | `False` |
| `device` | `Optional[device]` | Device for tensor operations. | `None` |

Returns:

| Type | Description |
|---|---|
| `Tuple` | Tuple of (x, x_hat, states_hat, outputs_hat, mu_encoder, logvar_encoder, mu_regressor, logvar_regressor, retvals_norm). |
Source code in src/bnode_core/nn/vae/vae_architecture.py
predict(param: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]
Generate timeseries predictions from system parameters only.
Convenience method for inference mode. Bypasses Encoder and generates predictions using only Regressor and Decoder.
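A minimal usage sketch (unpacking assumed from the two-tuple return type):

```python
import torch

params = torch.rand(16, 3)  # (batch, parameter_dim)
states_hat, outputs_hat = model.predict(params)  # assumed unpacking, see Returns
```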
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `param` | `Tensor` | System parameters of shape (batch, parameter_dim). | required |

Returns:

| Type | Description |
|---|---|
| `Tuple[Tensor, Tensor]` | Same as forward() with predict=True. |
Source code in src/bnode_core/nn/vae/vae_architecture.py
save(path: Path)
Save model state dictionary to disk.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `Path` | Path to save the model weights. Parent directories are created if needed. | required |
Source code in src/bnode_core/nn/vae/vae_architecture.py
load(path: Path, device: Optional[torch.device] = None)
Load model state dictionary from disk.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `Path` | Path to the saved model weights. | required |
| `device` | `Optional[device]` | Device to map the loaded weights to (e.g., 'cpu', 'cuda'). | `None` |
Source code in src/bnode_core/nn/vae/vae_architecture.py
loss_function(x: torch.Tensor, x_hat: torch.Tensor, mu: torch.Tensor, mu_hat: torch.Tensor, logvar: torch.Tensor, logvar_hat: torch.Tensor, beta: float = 1.0, gamma: float = 1000.0, capacity: Optional[float] = None, reduce: bool = True, device: Optional[torch.device] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
Compute composite loss function for VAE training.
Implements the PELS-VAE loss combining reconstruction, KL divergence, and regressor losses. Supports two modes:
- Standard β-VAE: loss = mse_loss + β * kl_loss + regressor_loss
- Capacity-constrained: loss = mse_loss + γ * |kl_loss - capacity| + regressor_loss
The regressor loss ensures that the Regressor's predicted latent distribution matches the Encoder's latent distribution, enabling parameter-to-latent predictions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Original timeseries (normalized), shape (batch, n_channels, seq_len). | required |
| `x_hat` | `Tensor` | Reconstructed timeseries (normalized), shape (batch, n_channels, seq_len). | required |
| `mu` | `Tensor` | Encoder's latent mean, shape (batch, bottleneck_dim). | required |
| `mu_hat` | `Tensor` | Regressor's latent mean, shape (batch, bottleneck_dim). | required |
| `logvar` | `Tensor` | Encoder's latent log-variance, shape (batch, bottleneck_dim). | required |
| `logvar_hat` | `Tensor` | Regressor's latent log-variance, shape (batch, bottleneck_dim). | required |
| `beta` | `float` | Weight for KL divergence term (ignored if capacity is not None). | `1.0` |
| `gamma` | `float` | Weight for capacity constraint term (used only if capacity is not None). | `1000.0` |
| `capacity` | `Optional[float]` | Target KL divergence capacity. If None, uses standard β-VAE loss. | `None` |
| `reduce` | `bool` | If True, return scalar losses. If False, return per-sample losses. | `True` |
| `device` | `Optional[device]` | Device for tensor operations. | `None` |

Returns:

| Type | Description |
|---|---|
| `Tuple[Tensor, Tensor, Tensor, Tensor]` | Tuple of (loss, mse_loss, kl_loss, regressor_loss): the total loss and its individual components. |
Notes
The capacity constraint encourages the model to use exactly 'capacity' nats of information in the latent space, preventing posterior collapse or over-regularization.
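A minimal sketch of both documented modes (reductions are illustrative, and the regressor term is written here as an MSE between the two distributions' parameters, which is one common choice rather than necessarily the module's exact formulation):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, mu_hat, logvar, logvar_hat,
             beta=1.0, gamma=1000.0, capacity=None):
    mse_loss = F.mse_loss(x_hat, x)
    kl_loss = torch.mean(
        -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    # Assumption: match the Regressor's (mu, logvar) to the Encoder's.
    regressor_loss = (F.mse_loss(mu_hat, mu.detach())
                      + F.mse_loss(logvar_hat, logvar.detach()))
    if capacity is None:
        loss = mse_loss + beta * kl_loss + regressor_loss  # standard beta-VAE
    else:
        # capacity-constrained variant
        loss = mse_loss + gamma * (kl_loss - capacity).abs() + regressor_loss
    return loss, mse_loss, kl_loss, regressor_loss
```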
Source code in src/bnode_core/nn/vae/vae_architecture.py
bnode_core.nn.vae.vae_train_test
VAE training and testing pipeline for timeseries modeling.
This module implements the complete training pipeline for Variational Autoencoders (VAE) with parameter conditioning, supporting standard VAE, PELS-VAE, and feed-forward modes.
Attention
This documentation is AI generated. Be aware of possible inaccuracies.
Command-line Usage
The module uses Hydra for configuration management and MLflow for experiment tracking. Training is launched via the command line:
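```bash
uv run python -m bnode_core.nn.vae.vae_train_test
```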
Configuration files are loaded from conf/train_test_vae.yaml (or specified config path).
Configuration
Key configuration parameters (via Hydra config):
- `dataset_name`: Name of HDF5 dataset to load
- `use_cuda`: Enable CUDA acceleration
- `use_amp`: Enable automatic mixed precision training
- `nn_model.network.*`: Model architecture parameters (hidden_dim, n_latent, activation, etc.)
- `nn_model.training.*`: Training hyperparameters (batch_size, lr, max_epochs, etc.)
Training Workflow
- Load HDF5 dataset and create train/validation/test/common_test dataloaders
- Initialize VAE model with specified architecture
- Initialize normalization layers on the full training dataset
- Train with:
    - Adam optimizer with learning rate scheduling (ReduceLROnPlateau)
    - Early stopping monitoring validation loss
    - Capacity scheduling for controlled KL divergence growth (see the sketch after this list)
    - Automatic mixed precision (AMP) support
    - Gradient clipping for stability
- Save best model checkpoint based on validation loss
- Evaluate on all dataset splits and save predictions to HDF5 file
- Log all metrics and artifacts to MLflow
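How the capacity schedule might look (purely illustrative; the actual schedule shape and its hyperparameters are set via the Hydra config):

```python
def capacity_at_epoch(epoch: int, max_capacity: float, ramp_epochs: int) -> float:
    # Illustrative linear ramp: grow the KL target from 0 to max_capacity
    # over ramp_epochs epochs, then hold it constant.
    return min(epoch / max(ramp_epochs, 1), 1.0) * max_capacity
```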
Output Files
- `model.pth`: Best model checkpoint (state_dict)
- `dataset_with_predictions.h5`: Copy of input dataset with added model predictions
- `vae_train_test.py`, `vae_architecture.py`: Copies of source files for reproducibility
Key Features
- Multi-pass prediction: Average multiple stochastic forward passes for robust predictions
- Capacity scheduling: Gradually increase KL divergence capacity to prevent posterior collapse
- Early stopping: Monitor validation loss with configurable patience and threshold
- MLflow integration: Automatic logging of metrics, parameters, and artifacts via decorator
- Reproducibility: Saves source code and full configuration to output directory
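The multi-pass idea in a nutshell (a sketch: draw several latent samples and average the decoded timeseries):

```python
import torch

def multi_pass_decode(decoder, reparametrize, mu, logvar, n_passes=10):
    # Average several stochastic decoder passes for a more robust prediction.
    passes = [decoder(reparametrize(mu, logvar)) for _ in range(n_passes)]
    return torch.stack(passes).mean(dim=0)
```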
train(cfg: train_test_config_class) -> float
Train VAE model on timeseries dataset with MLflow tracking.
Complete training pipeline including:
- Dataset loading and preprocessing
- Model initialization and normalization layer setup
- Training loop with early stopping and capacity scheduling
- Evaluation on all dataset splits
- Model checkpoint saving and artifact logging
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cfg` | `train_test_config_class` | Hydra configuration object containing all training parameters. Key sections: dataset_name, use_cuda, use_amp, nn_model.network, nn_model.training. | required |

Returns:

| Type | Description |
|---|---|
| `float` | Final MSE loss on the test set. |
Notes
- Uses @log_hydra_to_mlflow decorator for automatic MLflow experiment tracking
- Saves best model based on validation loss
- Copies dataset to output directory with added model predictions
- Logs metrics at each epoch: loss, mse_loss, kl_loss, regressor_loss, populated_dims
Source code in src/bnode_core/nn/vae/vae_train_test.py
main()
Entry point for VAE training via Hydra CLI.
Initializes Hydra configuration system and launches train with validated config. Auto-detects config directory and uses 'train_test_vae' as the default config name.
This function can be registered in pyproject.toml, enabling command-line execution via a custom script name.
Examples:

Run from command line:

```bash
uv run python -m bnode_core.nn.vae.vae_train_test
```

With config overrides:

```bash
uv run python -m bnode_core.nn.vae.vae_train_test \
    nn_model.training.lr_start=0.0001 \
    dataset_name=my_dataset
```
Side Effects
- Registers config store with Hydra
- Auto-detects config directory from filepaths
- Launches Hydra-decorated train function