While it is possible to install via PyPI:

```
pip install policyengine-us-data
```

the recommended installation is

```
pip install -e .[dev]
```

which installs the development dependencies in editable mode (so that changes
to the package code are reflected immediately); policyengine-us-data is a development
package and not intended for direct end-user access.
PRs must come from branches pushed to PolicyEngine/policyengine-us-data, not from
personal forks. The PR workflow hard-fails fork-based PRs before the real test suite
runs because the required secrets are unavailable there.
Before opening a PR, push the current branch to the upstream repo:

```
make push-pr-branch
```

That target pushes the current branch to the upstream remote and sets tracking so
`gh pr create` opens the PR from PolicyEngine/policyengine-us-data.
The following SSA data sources are used in this project:
- Latest Trustees Report (2025) - source for `social_security_aux.csv` (extracted via `extract_ssa_costs.py`)
- Single Year Supplementary Tables (2025) - long-range demographic and economic projections
- Single Year Age Demographic Projections (2024 - latest published) - source for `SSPopJul_TR2024.csv` population data
PolicyEngine constructs its representative household datasets through a multi-step pipeline. Public survey data is merged, stratified, and cloned into geographic variants per household. Each clone is simulated through PolicyEngine US with stochastic take-up, then calibrated via L0-regularized optimization against administrative targets at the national, state, and congressional district levels, producing geographically representative datasets.
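The calibration step above can be sketched as a reweighting problem. The toy example below (hypothetical matrix, targets, and learning rate; the production pipeline works on sparse matrices and adds an L0 sparsity penalty) fits household weights by gradient descent so that weighted aggregates match administrative targets:

```python
import numpy as np

# Toy calibration sketch (hypothetical data): 5 households, 2 targets.
# Each column of A is one household's contribution to each target.
A = np.array([
    [1.0, 1.0, 1.0, 1.0, 1.0],       # household count target
    [10.0, 20.0, 15.0, 40.0, 5.0],   # aggregate income target
])
targets = np.array([100.0, 2000.0])

# Optimize log-weights (keeping weights positive) by gradient descent
# on squared relative error between achieved and target aggregates.
# The real pipeline also penalizes the number of nonzero weights (L0).
log_w = np.zeros(A.shape[1])
for _ in range(20_000):
    w = np.exp(log_w)
    residual = (A @ w - targets) / targets
    grad_w = A.T @ (2 * residual / targets)  # d(loss)/d(w)
    log_w -= 0.5 * w * grad_w                # chain rule through exp

calibrated = A @ np.exp(log_w)  # aggregates under fitted weights
```

Parameterizing in log space is a common trick to enforce strictly positive weights without a constrained solver.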
The Enhanced CPS (make data-legacy) produces a national-only calibrated dataset. For the current geography-specific pipeline, see docs/calibration.md.
The repo currently contains two calibration tracks:
- Legacy Enhanced CPS (`make data-legacy`), which uses the older `EnhancedCPS`/`build_loss_matrix()` path for national-only calibration.
- Unified calibration (`docs/calibration.md`), which uses `storage/calibration/policy_data.db` and the sparse matrix + L0 pipeline for current national and geography-specific builds.
For detailed calibration usage, see docs/calibration.md and modal_app/README.md.
The pipeline runs as sequential steps in Modal:
```
make pipeline  # prints the steps below

# 1. Build data (CPS/PUF/ACS → source-imputed stratified CPS)
make build-data-modal

# 2. Build calibration matrices (CPU, ~10h)
make build-matrices

# 3. Fit weights (GPU, county + national in parallel)
make calibrate-both

# 4. Build H5 files (state/district/city + national in parallel)
make stage-all-h5s

# 5. Promote to versioned HF paths
make promote
```

The paper requires a LaTeX distribution (e.g., TeX Live or MiKTeX) with the following packages:
- graphicx (for figures)
- amsmath (for mathematical notation)
- natbib (for bibliography management)
- hyperref (for PDF links)
- booktabs (for tables)
- geometry (for page layout)
- microtype (for typography)
- xcolor (for colored links)
On Ubuntu/Debian, you can install these with:

```
sudo apt-get install texlive-latex-base texlive-latex-recommended texlive-latex-extra texlive-fonts-recommended
```

On macOS with Homebrew:

```
brew install --cask mactex
```

To build the paper:

```
make paper
```

To clean LaTeX build files:

```
make clean-paper
```

The output PDF will be at `paper/main.pdf`.
The documentation uses Jupyter Book 2 (pre-release) with MyST. To install:
```
# Install Jupyter Book 2 pre-release
pip install --pre "jupyter-book==2.*"

# Install MyST CLI
npm install -g mystmd
```

To build and serve the documentation locally:

```
cd docs
myst start
```

Or, alternatively, from the project root:

```
jupyter book start docs
```

Both commands start a local server at http://localhost:3001 where you can view the documentation.

The legacy Makefile command:

```
make documentation
```

Note: the Makefile uses the older `jb` command syntax, which may not work with Jupyter Book 2. Use `myst start` or `jupyter book start docs` instead.
Each US data release now publishes both:
- `release_manifest.json`
- `trace.tro.jsonld`
The release manifest remains the operational source of truth for:
- published artifact paths and checksums
- build IDs and timestamps
- build-time `policyengine-us` provenance
trace.tro.jsonld is a generated TRACE declaration built from that manifest. It gives a
standards-based provenance export over the same release artifacts, including a
composition fingerprint across the release manifest and the artifacts it describes.
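As a rough illustration, a composition fingerprint can be computed by hashing the canonicalized manifest together with the checksums of the artifacts it describes. The manifest keys and the exact hashing recipe below are assumptions for the sketch; the real `release_manifest.json` fields and the recipe used in `trace.tro.jsonld` may differ:

```python
import hashlib
import json

# Hypothetical manifest shape; the real release_manifest.json differs.
manifest = {
    "build_id": "build-123",
    "artifacts": [
        {"path": "enhanced_cps.h5", "sha256": "aa" * 32},
        {"path": "pooled_cps.h5", "sha256": "bb" * 32},
    ],
}

def composition_fingerprint(manifest: dict) -> str:
    """Hash the canonicalized manifest plus each artifact checksum."""
    h = hashlib.sha256()
    # sort_keys makes the JSON serialization deterministic.
    h.update(json.dumps(manifest, sort_keys=True).encode())
    for artifact in manifest["artifacts"]:
        h.update(bytes.fromhex(artifact["sha256"]))
    return h.hexdigest()

fp = composition_fingerprint(manifest)
```

Because the fingerprint covers both the manifest and the artifact checksums, tampering with either one changes the fingerprint.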
The TRO uses the canonical TROv 0.1 vocabulary and
surfaces PolicyEngine-specific build provenance under the https://policyengine.org/trace/0.1#
extension namespace. Structured fields on the performance node
(`pe:dataBuildFingerprint`, `pe:builtWithModelVersion`, `pe:builtWithModelGitSha`,
`pe:dataBuildId`, `pe:emittedIn`) let a verifier cross-check this TRO against the
certified-bundle TRO emitted by policyengine.py without parsing prose.
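Such a cross-check reduces to a field-by-field comparison. In this sketch the manifest key names are hypothetical, while the `pe:` field names are those surfaced on the performance node:

```python
# Map from (hypothetical) release-manifest keys to the pe: fields on the
# TRO performance node that should carry the same values.
FIELD_MAP = {
    "data_build_id": "pe:dataBuildId",
    "model_version": "pe:builtWithModelVersion",
    "model_git_sha": "pe:builtWithModelGitSha",
}

def cross_check(manifest: dict, performance_node: dict) -> list[str]:
    """Return the pe: fields whose values disagree with the manifest."""
    return [
        tro_key
        for manifest_key, tro_key in FIELD_MAP.items()
        if manifest.get(manifest_key) != performance_node.get(tro_key)
    ]

manifest = {
    "data_build_id": "build-123",
    "model_version": "1.4.0",
    "model_git_sha": "abc1234",
}
node = {
    "pe:dataBuildId": "build-123",
    "pe:builtWithModelVersion": "1.4.0",
    "pe:builtWithModelGitSha": "abc1234",
}

mismatches = cross_check(manifest, node)  # empty list when consistent
```

A verifier would apply the same comparison between this TRO and the certified-bundle TRO rather than parsing prose descriptions.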
The emitted TRO is validated against `policyengine_us_data/schemas/trace_tro.schema.json`.
Important boundary:
- the TRACE file does not replace the release manifest
- the TRACE file does not decide model/data compatibility
For the broader certified-bundle architecture, see
policyengine.py release bundles
and the official TRACE specification.