
Cache Sessions Guide

Version: 2.5.1

When running multiple MMM configurations sequentially (especially with different numbers of media channels), users can encounter PyTensor errors such as:

RuntimeError: Incompatible Elemwise input shapes [(208,), (208, 3)]

This occurs because PyTensor caches compiled functions with specific tensor shapes. When the configuration changes (e.g., from 2 channels to 3), PyTensor attempts to reuse cached functions whose shapes no longer match.

Traditional workaround: restart the Python kernel between configuration changes.

AMMM solution: Automatic cache isolation per configuration using AMMM_CACHE_MODE.

AMMM’s SessionManager creates a separate compilation directory for each unique model configuration, based on a configuration hash that includes:

  • Number and names of media channels
  • Control variables (extra_features_cols)
  • Prophet seasonality features
  • Adstock maximum lag (ad_stock_max_lag)

When configurations switch, AMMM automatically uses different cache directories, preventing shape contamination.

Cache isolation is controlled by the AMMM_CACHE_MODE environment variable.

AMMM_CACHE_MODE=off: No cache isolation. Uses PyTensor’s default behaviour.

Terminal window
export AMMM_CACHE_MODE=off
python your_script.py

Use when: Running a single configuration.

AMMM_CACHE_MODE=reuse: Creates persistent cache directories at ~/.ammm/compiled/.

Terminal window
export AMMM_CACHE_MODE=reuse
python your_script.py

Use when:

  • Repeatedly experimenting with different configurations
  • Want to reuse compiled graphs across sessions
  • Working with limited compute resources

Advantages:

  • Faster subsequent runs with the same configuration
  • Caches persist across sessions
  • Suitable for development/research

Disadvantages:

  • Caches accumulate over time
  • Requires manual pruning

AMMM_CACHE_MODE=temp: Creates temporary cache directories that are deleted after model fitting.

Terminal window
export AMMM_CACHE_MODE=temp
python your_script.py

Use when:

  • CI/CD pipelines
  • One-off model runs
  • Want automatic cleanup
  • Limited disk space

Advantages:

  • Automatic cleanup
  • No cache accumulation
  • Suitable for production/automation

Disadvantages:

  • No cache reuse across runs
  • Slightly slower for repeated runs

Enable detailed cache logging:

Terminal window
export AMMM_CACHE_VERBOSE=1
export AMMM_CACHE_MODE=reuse
python your_script.py

Output:

[SessionManager] Using cache: /home/user/.ammm/compiled/pytensor_cache_abc12345
[SessionManager] Config hash: abc12345
[SessionManager] Cleaned up cache: /home/user/.ammm/compiled/pytensor_cache_abc12345

In reuse mode, the cache root is laid out as:

~/.ammm/compiled/
├── pytensor_cache_abc12345/ # Config 1 (2 channels)
├── pytensor_cache_def67890/ # Config 2 (3 channels)
└── pytensor_cache_ghi11121/ # Config 3 (4 channels)

Each directory is named with an 8-character hash of the configuration.

In temp mode, caches live under the system temporary directory and are removed after each run:

/tmp/
├── pytensor_cache_abc12345/ # Deleted after run
└── pytensor_cache_def67890/ # Deleted after run

Inspect existing caches with the ammm-cache CLI:

Terminal window
ammm-cache info

Output:

============================================================
AMMM Cache Information
============================================================
Cache root: /home/user/.ammm/compiled
Exists: True
Cache count: 12
Total size: 2847.3 MB
Oldest cache: abc12345 (15.2 days old)
Newest cache: xyz98765 (0.3 days old)
============================================================

For a more detailed per-cache listing:

Terminal window
ammm-cache info --verbose

Prune caches down to the specified limits, removing the least recently used (LRU) caches first:

Terminal window
# Use default limits (5GB total, 50 caches, 30 days max age)
ammm-cache prune
# Custom limits
ammm-cache prune --max-size 3000 --max-count 20 --max-age 14
# Dry run
ammm-cache prune --dry-run --verbose

Parameters:

  • --max-size <MB>: Maximum total cache size in megabytes (default: 5000)
  • --max-count <N>: Maximum number of cache directories (default: 50)
  • --max-age <days>: Maximum cache age in days (default: 30)
  • --dry-run: Show what would be removed without removing
  • --verbose: Show detailed list of removed caches

Remove all cache directories:

Terminal window
# Interactive confirmation
ammm-cache clear
# Force (skip confirmation)
ammm-cache clear --force
# Dry run
ammm-cache clear --dry-run

Warning: This removes all cached compilation results. Next runs will recompile from scratch.

To operate on a custom cache root:

Terminal window
ammm-cache info --cache-root /custom/path
ammm-cache prune --cache-root /custom/path

Example: testing different channel combinations without kernel restarts:

import os
os.environ["AMMM_CACHE_MODE"] = "reuse"
from src.driver import MMMBaseDriverV2
# Run 1: 2 channels
config_2ch = {...}
driver1 = MMMBaseDriverV2(config_2ch, ...)
driver1.fit_model()
# Run 2: 3 channels (same session)
config_3ch = {...}
driver2 = MMMBaseDriverV2(config_3ch, ...)
driver2.fit_model() # No tensor shape error
# Run 3: Back to 2 channels
driver3 = MMMBaseDriverV2(config_2ch, ...)
driver3.fit_model() # Reuses cache from Run 1

Example: automated model fitting in CI/CD with automatic cleanup:

#!/bin/bash
export AMMM_CACHE_MODE=temp
python fit_model_config_a.py
python fit_model_config_b.py
python fit_model_config_c.py
# Caches automatically cleaned up

Example: scheduled cache cleanup:

#!/bin/bash
ammm-cache prune --max-age 7 --verbose
echo "Pruned old AMMM caches" | mail -s "Cache Cleanup" admin@company.com

Example: debugging with verbose cache logging:

import os
os.environ["AMMM_CACHE_MODE"] = "reuse"
os.environ["AMMM_CACHE_VERBOSE"] = "1"
from src.driver import MMMBaseDriverV2
driver = MMMBaseDriverV2(...)
driver.fit_model()

Issue: “Cannot change compiledir after PyTensor initialization”


Symptom: Warning about compiledir not being set effectively.

Cause: Some environments lock PyTensor settings after first import.

Solution:

  1. Restart the kernel/session
  2. Set AMMM_CACHE_MODE before any PyTensor/PyMC imports
  3. Let MMMBaseDriverV2 handle cache management automatically; you can verify the active compile directory as shown below
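
One quick way to confirm which compile directory PyTensor is actually using is to read PyTensor’s own setting (a minimal sketch; exactly when AMMM applies the directory during driver setup/fitting is an implementation detail):

import os
os.environ["AMMM_CACHE_MODE"] = "reuse"    # must be set before any PyTensor/PyMC import
os.environ["AMMM_CACHE_VERBOSE"] = "1"

from src.driver import MMMBaseDriverV2

import pytensor
# With cache isolation active, this should point inside ~/.ammm/compiled/
# (compare against the [SessionManager] log lines shown above).
# If it still shows PyTensor's default, run driver.fit_model() first and check again.
print(pytensor.config.compiledir)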

If caches are using too much disk space, prune or clear them:

Terminal window
ammm-cache info
ammm-cache prune --max-size 2000
# Or clear all
ammm-cache clear --force

If you hit permission errors on the cache directory:

Terminal window
# Check permissions
ls -ld ~/.ammm/compiled/
# Fix permissions
chmod 755 ~/.ammm/compiled/
# Or use custom cache root
export AMMM_CACHE_ROOT=/tmp/ammm_cache

If shape errors persist, work through these diagnostic steps:

  1. Verify environment variable:

    import os
    print(os.getenv("AMMM_CACHE_MODE"))
  2. Check if cache isolation is active (look for log messages)

  3. Restart the Python kernel

Best practices:

  • Use AMMM_CACHE_MODE=reuse for iterative work
  • Enable AMMM_CACHE_VERBOSE=1 when debugging
  • Periodically prune caches (weekly/monthly)
  • Use AMMM_CACHE_MODE=temp for CI/CD
  • Monitor disk usage in long-running systems
  • Set up automated pruning for persistent caches
  • Use custom cache roots per user:
    Terminal window
    export AMMM_CACHE_ROOT=/shared/caches/$USER
  • Implement disk quotas
  • Schedule regular cleanup jobs
  • For repeated runs with same config, use reuse mode
  • First run compiles graphs (slower)
  • Subsequent runs reuse cache (faster)
  • Balance cache size vs recompilation time

Only shape-impacting parameters contribute to the hash:

{
    'media': ['tv_spend', 'radio_spend'],
    'extra_features_cols': ['control_1', 'promo'],
    'prophet_cols': ['trend', 'yearly'],
    'ad_stock_max_lag': 4
}

Hash: md5(json.dumps(config, sort_keys=True))[:8]
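
A minimal sketch of how such a hash could be computed (illustrative only; the exact fields and serialization AMMM uses may differ):

import hashlib
import json

# Example shape-impacting configuration (see above)
config = {
    'media': ['tv_spend', 'radio_spend'],
    'extra_features_cols': ['control_1', 'promo'],
    'prophet_cols': ['trend', 'yearly'],
    'ad_stock_max_lag': 4,
}

# First 8 hex characters of the MD5 of the sorted-keys JSON encoding
config_hash = hashlib.md5(
    json.dumps(config, sort_keys=True).encode("utf-8")
).hexdigest()[:8]

print(f"pytensor_cache_{config_hash}")  # directory name under the cache root

Configurations that differ only in priors serialize identically here, so they map to the same directory (see the FAQ below). On disk, each cache directory then has the following layout: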

pytensor_cache_{hash}/
├── compiledir_<platform>/
│   ├── <function_hash_1>/
│   │   ├── mod.c
│   │   ├── mod.so
│   │   └── key.pkl
│   └── <function_hash_2>/
│       ├── mod.c
│       ├── mod.so
│       └── key.pkl
└── lock_dir/

Each cached function includes:

  • mod.c: Generated C code
  • mod.so: Compiled shared object
  • key.pkl: Function signature/metadata
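
To see these files on disk for persistent caches (assuming reuse mode and the default cache root):

Terminal window
# List a few cached function directories via their key.pkl files
find ~/.ammm/compiled -name key.pkl | head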

Q: Does cache isolation slow down model fitting?
A: First run compiles (same speed). Subsequent runs with same config are faster in reuse mode.

Q: How much disk space do caches use?
A: Typically 50-200 MB per configuration. Use ammm-cache info to check.

Q: Can I use cache isolation in notebooks?
A: Yes. Set environment variables before importing AMMM:

import os
os.environ["AMMM_CACHE_MODE"] = "reuse"
from src.driver import MMMBaseDriverV2

Q: What happens if I change priors but keep channels the same?
A: The cache hash is based only on shape-impacting parameters. Changed priors use the same cache (safe, as model structure is identical).

Q: Can I manually clear a specific cache?
A: Yes, delete the directory:

Terminal window
rm -rf ~/.ammm/compiled/pytensor_cache_abc12345

Q: Does this work with hierarchical models?
A: Yes, cache isolation works with all AMMM model types.