How We Compare

Given the popularity of the Media Mix Modelling (MMM) approach, numerous packages are available. Below is a concise comparison:

| Feature | PyMC-Marketing | Robyn | Orbit KTR | Meridian* | AMMM |
|---|---|---|---|---|---|
| Language | Python | R | Python | Python | Python |
| Approach | Bayesian | Traditional ML | Bayesian | Bayesian | Bayesian |
| Foundation | PyMC | - | STAN/Pyro | TensorFlow Probability | PyMC + JAX |
| Company | PyMC Labs | Meta | Uber | Google | Independent |
| Open source | Yes | Yes | Yes | Yes | Yes |
| Model Building | Yes | Yes | Yes | Yes | Yes |
| Out-of-Sample Forecasting | Yes | No | Yes | No | No |
| Budget Optimiser | Yes | Yes | No | Yes | Yes |
| Time-Varying Intercept | Yes | No | Yes | Yes | Yes |
| Time-Varying Coefficients | Yes | No | Yes | No | No |
| Custom Priors | Yes | No | No | Yes | Yes |
| Custom Model Terms | Yes | No | No | No | No |
| Lift-Test Calibration | Yes | Yes | No | Yes | Yes |
| Geographic Modelling | Yes | No | No | Yes | No |
| Unit-Tested | Yes | No | Yes | Yes | Yes |
| MLflow Integration | Yes | No | No | Yes | No |
| GPU Sampling Acceleration | Yes | N/A | No | Yes | Yes |
| Prophet Integration | No | No | No | No | Yes |
| Automated Outlier Handling | No | No | No | No | Yes |
| Model Selection (ELPD) | No | No | No | No | Yes |
| Transfer Entropy Analysis | No | No | No | No | Yes |
| Stationarity Testing | No | No | No | No | Yes |
| Consulting Support | Provided by Authors | Third-party agency | Third-party agency | Third-party agency | Provided by Author |

*Meridian was released as the successor to Lightweight-MMM, which Google has since deprecated.

Last updated: 2025-08-07
Last reviewed: 2025-10-06


Most of these MMM libraries implement different flavours of Bayesian modelling (Robyn, which relies on ridge regression, is the exception). While the Bayesian packages share a broadly similar statistical foundation, they differ in API flexibility, underlying technology stack, and implementation approach.

PyMC-Marketing is a widely used open-source library with an extensive feature set and strong community support. Its flexibility makes it suitable for teams with complex requirements, though this breadth comes with a significant learning curve. While AMMM is built using PyMC (the underlying probabilistic programming framework), it is not a fork of PyMC-Marketing. AMMM represents a distinct implementation philosophy focused on statistical rigour, model stability, and practical usability for typical MMM use cases.
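
For illustration, a minimal PyMC-Marketing model fit might look like the sketch below. This is a sketch only: the exact class names and arguments vary between releases, and the file and column names here are hypothetical.

```python
import pandas as pd
from pymc_marketing.mmm import MMM, GeometricAdstock, LogisticSaturation

# Hypothetical weekly dataset: a date column, two spend channels, a target.
data = pd.read_csv("weekly_mmm_data.csv")
X = data[["date_week", "tv_spend", "search_spend"]]
y = data["sales"]

mmm = MMM(
    date_column="date_week",
    channel_columns=["tv_spend", "search_spend"],
    adstock=GeometricAdstock(l_max=8),  # geometric carry-over, up to 8 weeks
    saturation=LogisticSaturation(),    # diminishing returns per channel
    yearly_seasonality=2,               # Fourier terms for annual seasonality
)
mmm.fit(X, y)
```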

Other libraries have their own strengths. For example, Google Meridian features a more opinionated API and integration with the Google ecosystem, which can be advantageous for organisations already embedded in Google’s stack.

Your optimal choice depends primarily on:

  1. Your team’s technical expertise
  2. The complexity of your data and client use cases
  3. Your preference for an independent open-source solution vs. one backed by a major ad publisher
Choose Robyn if:

  • Your team primarily uses R instead of Python
  • You prefer a “simpler” but less rigorous approach than Bayesian models (ridge regression)
  • Your MMM data tends to be relatively simple and the number of channels small

Choose Meridian if:

  • You want a simplified (albeit less flexible) API to build models across geographies
  • You want strong integration with other Google products such as Colab
  • You have the expertise to work with and debug TFP (which can be non-trivial)

Choose PyMC-Marketing if:

  • You want its advanced statistical modelling capabilities (e.g., Gaussian Processes) and understand the complexity–interpretability–stability trade-offs
  • Integration into broader data science workflows is important (MLflow)
  • You prefer independence from major ad publishers and networks
  • Professional consulting support is available (but costly)

Choose AMMM if:

  • Statistical rigour and model stability are your top priorities
  • You want efficient use of degrees of freedom (Prophet integration for holidays rather than individual dummy variables; see the sketch after this list)
  • Automated seasonality detection is preferred over manual specification (Prophet vs. knots/splines)
  • You need rigorous model diagnostics and selection criteria (ELPD, transfer entropy, stationarity tests; see the ArviZ sketch after this list)
  • Your dataset has outliers that need robust handling (quasi-winsorisation)
  • You prefer 100% in-sample inference for typical MMM sample sizes (52–104 weeks)
  • You value a statistically sound, battle-tested approach over extensive customisation options
  • Consulting support from the package author is available
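
To illustrate the degrees-of-freedom point above: Prophet lets you declare each holiday once, with an optional effect window, rather than hand-building a dummy column per occurrence. Below is a minimal sketch of Prophet’s general holiday handling; the holiday table and DataFrame are hypothetical, and it does not show AMMM’s internal integration.

```python
import pandas as pd
from prophet import Prophet

# Hypothetical holiday table: one named holiday, two yearly occurrences,
# with the effect allowed to extend one day past the date itself.
holidays = pd.DataFrame({
    "holiday": "black_friday",
    "ds": pd.to_datetime(["2023-11-24", "2024-11-29"]),
    "lower_window": 0,
    "upper_window": 1,
})

m = Prophet(holidays=holidays)
m.fit(df)  # df must use Prophet's expected columns: "ds" (date) and "y" (target)
```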
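Similarly, ELPD-based model selection of the kind mentioned above can be done with ArviZ’s standard comparison tooling. A sketch, assuming two hypothetical fitted models whose inference data includes pointwise log-likelihood values:

```python
import arviz as az

# idata_a and idata_b are hypothetical InferenceData objects from two
# candidate models, sampled with log-likelihood values stored.
comparison = az.compare({"model_a": idata_a, "model_b": idata_b}, ic="loo")

# elpd_loo: estimated expected log pointwise predictive density (higher is
# better); elpd_diff: gap to the best-ranked model; weight: stacking weight.
print(comparison[["elpd_loo", "elpd_diff", "weight"]])
```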

Glossary

  • Out-of-Sample Forecasting: Producing predictions for future time periods beyond the observed time horizon. This is distinct from evaluating a model on a held-out test split within the observed data.
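
A schematic sketch of the distinction, assuming a hypothetical weekly DataFrame df with 104 observed rows and a "date" column:

```python
import pandas as pd

# Held-out evaluation: both splits lie *within* the observed horizon,
# so predictions for the test weeks can be scored against actuals.
train, test = df.iloc[:80], df.iloc[80:]  # weeks 1-80 vs. weeks 81-104

# Out-of-sample forecasting: the target weeks lie *beyond* the observed
# horizon, so no actuals exist yet to score against.
future_weeks = pd.date_range(
    start=df["date"].max() + pd.Timedelta(weeks=1), periods=13, freq="W"
)
```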