I shipped the most complex thing I’ve ever built. This is the architecture.

It’s an asset tracker with a targeting engine. That description undersells it by about six layers.

Layer 1: Material Governance

Two source canons feed a synthetic material registry. A tagging table sits alongside it — program arrays, multi-attribute tracking, everything the business cares about flagged explicitly. These three feed a weekly-refreshed inferred dimension: every material code that exists, with full tag context.

This is the foundation. If your material registry is wrong, everything downstream is wrong. Making this layer immutable and authoritative was the first problem to solve.
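The refresh logic is simpler than it sounds. Here's a minimal Python sketch of the shape of it — the real thing is SQL over warehouse tables, and every name here (`source_a`, `source_b`, `tags`) is illustrative, not the actual schema:

```python
# Sketch of the Layer 1 refresh: union two source canons, attach tag
# context, emit one row per material code. Names are illustrative.

def build_material_dimension(source_a, source_b, tags):
    """source_a / source_b: dicts of material_code -> attributes.
    tags: dict of material_code -> tag context (programs, flags)."""
    dimension = {}
    for canon in (source_a, source_b):
        for code, attrs in canon.items():
            row = dimension.setdefault(code, {"sources": []})
            row["sources"].append(attrs["source"])
            row.update({k: v for k, v in attrs.items() if k != "source"})
    for code, row in dimension.items():
        # Every material code gets full tag context, even if untagged.
        row["tags"] = tags.get(code, {"programs": [], "flags": []})
    return dimension
```

The point of the default empty tag context: "every material code that exists" means *every* code appears in the dimension, tagged or not, so downstream layers never have to left-join and null-check.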

Layer 2: Asset Event Canon

Material governance feeds every material event that matters onto a single grain: material code × work order. BOM lines, material movements, and synthetics — all on the same row, same schema.

This is the Kimball discipline. One fact grain, consistently defined. The temptation is to split these into separate tables and join later. Don’t. The canon is only useful if it’s complete.
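To make "one fact grain" concrete, here's a sketch in Python of the normalization step — three feeds, one schema. Field names are assumptions for the example:

```python
# Layer 2 sketch: normalize three event feeds onto one grain
# (material_code, work_order) with one shared schema.

def to_canon(event_type, rows):
    """Project a source feed onto the shared canon schema."""
    return [{
        "material_code": r["material_code"],
        "work_order": r["work_order"],
        "event_type": event_type,   # 'bom' | 'movement' | 'synthetic'
        "quantity": r.get("quantity", 0),
    } for r in rows]

def build_event_canon(bom_lines, movements, synthetics):
    # One fact table, consistently defined. No separate tables to
    # join later — the union IS the canon.
    return (to_canon("bom", bom_lines)
            + to_canon("movement", movements)
            + to_canon("synthetic", synthetics))
```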

Layer 3: Asset Install Canon

Business rules evaluate the event canon and determine what’s actually going to get installed. Not what’s on the BOM. Not what moved. What’s going in the ground.

This layer is where the “source wins” logic lives. Two sources can disagree. The install canon resolves it deterministically, based on rules the business has signed off on. The SQL doesn’t decide — the config does.
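A sketch of what config-driven "source wins" looks like. The precedence list is the config; the resolution function never changes when the business reorders sources. The source names here are made up:

```python
# Layer 3 sketch: deterministic source-precedence resolution.
# The precedence order lives in config, not in code.

SOURCE_PRECEDENCE = ["field_system", "bom", "movement"]  # assumed config row

def resolve_install(events, precedence=SOURCE_PRECEDENCE):
    """Pick the winning event per (material_code, work_order) grain.
    Lower index in the precedence list wins."""
    rank = {src: i for i, src in enumerate(precedence)}
    winners = {}
    for e in events:
        key = (e["material_code"], e["work_order"])
        best = winners.get(key)
        if best is None or rank[e["source"]] < rank[best["source"]]:
            winners[key] = e
    return winners
```

Change the business rule, change the list. The function — and in the real system, the SQL — is untouched.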

Layer 4: The Config Layer

Fifty to sixty percent of the columns are arrays.

This is the Data Vault 2.0 influence — configuration-driven design taken seriously. The config layer handles five dimensions simultaneously: asset-to-metric mapping, install event classification, functional area, program, and budget rule zone. Arrays because a single asset can belong to multiple programs, multiple budget zones, multiple functional areas — simultaneously and over time.

You never modify the SQL to add a new program. You add a row to the config.
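Here's roughly what a config row with array-valued columns looks like, sketched in Python. Column names are illustrative, not the real schema — the point is that onboarding a program is an append, not a code change:

```python
# Layer 4 sketch: one config row, array-valued columns. An asset can
# belong to multiple programs, zones, and areas simultaneously.

CONFIG = [
    {
        "asset_class": "pole",                          # illustrative
        "metrics": ["installs", "replacements"],
        "programs": ["storm_hardening", "grid_mod"],    # arrays, not scalars
        "budget_zones": ["Z1", "Z3"],
        "functional_areas": ["distribution"],
    },
]

def add_program_row(config, asset_class, program, **scopes):
    # New program? One more row. The query logic is never modified.
    config.append({"asset_class": asset_class,
                   "programs": [program], **scopes})
```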

Layer 5: Scope Evaluator

The scope evaluator reads the config arrays and resolves what applies to what. It’s the translation layer between “here’s everything we track” and “here’s what this query cares about.”

This is where most systems break. They hardcode the scope logic into the SQL. When the business changes — and it always changes — the SQL breaks. The scope evaluator is the firewall between business rules and query logic.
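What the firewall looks like, in sketch form: the evaluator takes a requested scope and intersects it against the config arrays. Nothing about a specific program or area is hardcoded — the names below are hypothetical:

```python
# Layer 5 sketch: a scope evaluator. Query logic asks "what's in scope
# for this program / functional area?" and never encodes the answer.

def in_scope(config_rows, program=None, functional_area=None):
    """Return asset classes whose config arrays match the requested scope.
    A None filter means 'don't constrain on this dimension'."""
    matched = []
    for row in config_rows:
        if program and program not in row["programs"]:
            continue
        if functional_area and functional_area not in row["functional_areas"]:
            continue
        matched.append(row["asset_class"])
    return matched
```

When the business redefines a program's scope, the config arrays change and every query downstream picks it up — no SQL edits.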

Layer 6: Parametric Targeting Engine

The targeting engine evaluates progress against a period target using two inputs: previous performance and days remaining in the period.

It’s deterministic. Given the same inputs, it produces the same output. Given updated inputs, it prescribes exactly what needs to happen today to hit the period goal. Not a trend line. Not a projection. A number with a basis.
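The arithmetic, as described above, reduces to something like this — a sketch, not the production logic:

```python
# Layer 6 sketch: required pace today = remaining gap / days remaining.
# Deterministic: same inputs, same output. A number with a basis.

def required_today(period_target, achieved, days_remaining):
    """Units needed per day, from today, to hit the period target."""
    if days_remaining <= 0:
        raise ValueError("period is over")
    remaining = max(period_target - achieved, 0)  # never prescribe negative work
    return remaining / days_remaining
```

That's the whole trick: it's not fitting a trend line to the past, it's dividing the remaining gap by the remaining time. Updated inputs tomorrow produce tomorrow's number.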

Consumption views sit on top and feed reporting assets. The reporting layer never touches raw data.

Why it’s durable

You never have to modify the SQL.

That’s the design goal, and it holds. New program? Config row. New asset class? Config row. New budget zone that spans three functional areas? Array in the config row.

The architecture absorbed two years of scope changes, three source systems, and a migration out of Power BI — without a SQL rewrite.

The lesson I took from it: designing the system matters more than delivering any single thing. A well-designed system absorbs change. A brittle one makes you earn every delivery twice.