Tag: Enterprise Architecture

  • Transactions: Why Embedding Structures Breaks Everything

    Remember Post 12? Three months to restructure sales territories.

    Now you know why.

    Posts 12-18 established the framework. Master Data = structures (classification, operating standards, relationships). Master Records = entities with minimal properties. Three separate layers with different authority patterns, different mutability, different governance.

    Now let’s go back to that restructuring nightmare and see exactly what went wrong.

    Because once you see it with framework language, you can’t unsee it.

    What You Actually Embedded

    When you put territory in a transaction, you embedded Master Data.

    From Post 13: Master Data is structures. Classification structures that organize entities into hierarchies. “Regional Sales > North Region” isn’t an entity. It’s a classification structure. Management created it to organize customers. Management controls it. Management changes it when strategy changes.

    That’s Master Data.

    You embedded it in a transaction.

```
Sale Transaction (Embedded Structure):
Customer: #12345
Product: SKU-789
Territory: Regional Sales > North Region
Amount: £10,000
Date: 2023-06-15
```

    “Regional Sales > North Region” is Master Data. Strategic authority. Mutable by management decision. Subject to restructuring when business needs change.

    The transaction is immutable. What happened on 2023-06-15 doesn’t change. Customer bought product for amount. That’s a captured fact.

    When you embed Master Data (mutable structure) in Transaction (immutable fact), you create an architectural violation.

    The unchangeable contains the changeable. When structure changes, historical facts break.

    What You Should Have Referenced

    From Post 14: Master Records are entities with minimal intrinsic properties.

    Customer #12345 is a Master Record. Product SKU-789 is a Master Record. These are things that exist. Entities with properties.

    Transactions should reference Master Records by ID. Just the ID. Not properties. Not classifications. Not structures.

```
Sale Transaction (Minimal):
Customer: #12345
Product: SKU-789
Amount: £10,000
Date: 2023-06-15
```

    Just IDs pointing to entities. The transaction captures what happened. The Master Records contain entity properties (time-stamped). The Master Data structures contain classifications (versioned).

    Reporting joins them at query time using dates and versions.

    That’s the separation that prevents the restructuring nightmare.
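To make that concrete, here is a minimal Python sketch of the query-time join, assuming hypothetical in-memory records. The field names, the `effective()` helper, and the two structure versions are illustrative, not a prescribed schema:

```python
from datetime import date

# Minimal, immutable transaction: just IDs and captured facts.
transaction = {"customer_id": "12345", "product_id": "SKU-789",
               "amount": 10_000, "date": date(2023, 6, 15)}

# Time-stamped Master Record properties for Customer #12345.
customer_history = [
    {"revenue": 1_200_000, "country": "UK",
     "effective_from": date(2022, 1, 1), "effective_to": date(2023, 12, 31)},
    {"revenue": 1_600_000, "country": "UK",
     "effective_from": date(2024, 1, 1), "effective_to": None},
]

# Versioned Master Data: territory structures with effective date ranges.
territory_versions = [
    {"version": "1.0", "effective_from": date(2022, 1, 1),
     "effective_to": date(2023, 12, 31),
     "classify": lambda props: "Regional Sales > North Region"},
    {"version": "2.0", "effective_from": date(2024, 1, 1), "effective_to": None,
     "classify": lambda props: ("North Region Enterprise"
                                if props["revenue"] > 1_000_000
                                else "North Region Mid-Market")},
]

def effective(rows, on):
    """Return the row whose effective date range covers the date `on`."""
    return next(r for r in rows
                if r["effective_from"] <= on
                and (r["effective_to"] is None or on <= r["effective_to"]))

def territory_for(txn, as_of=None):
    """Join transaction + Master Record + structure version at query time."""
    on = as_of or txn["date"]
    props = effective(customer_history, on)
    structure = effective(territory_versions, on)
    return structure["classify"](props)

print(territory_for(transaction))                          # classified with v1.0
print(territory_for(transaction, as_of=date(2024, 6, 1)))  # reclassified with v2.0
```

The transaction row never changes; the only thing that varies is which structure version and which property row the query selects for a given date.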

    The Authority Violation

    From Post 6: Authority Rule determines how data changes.

    Master Data has internal strategic authority. Management controls territory structures. They can redefine what territories exist. They change structures when strategy changes. Strategic authority means strategic mutability.

    Transactions have captured authority. What happened stays what happened. The sale on 2023-06-15 is a fact. Facts don’t change. Captured authority means immutability.

    When you embed strategic authority (mutable) in captured authority (immutable), authority patterns conflict.

    Management needs to change territory structures. But changing structures breaks historical transactions. The architectural violation creates the three-month project.

    Different authority. Different mutability. Can’t coexist in same data structure.

    The Three Structure Types You Embed

    Post 13 identified three Master Data structure types. You embed all three in transactions.

    Classification structures embedded:

    Territory hierarchies. Product taxonomies. Org structures. Customer segments. “Regional Sales > North Region” stamped in transaction. “Desktop Computing > Business Laptops” stamped in order. “Finance Division > Cost Center 4520” stamped in expense. “Enterprise Tier” stamped in customer interaction.

    Every restructure becomes a data migration project.

    Operating standard structures embedded:

    Payment terms. Service tiers. Delivery methods. “Net30” stamped in invoice. “Gold Service Level” stamped in support ticket. “Express Delivery” stamped in shipment.

    Every operational change requires updating historical records.

    Relationship structures embedded:

    Party-to-location assignments. Corporate hierarchies. Network memberships. “Ships to Warsaw Plant” stamped in order. “Subsidiary of Holdings” stamped in revenue record. “Member of Consortium with 15% discount” stamped in transaction.

    When party opens new locations (Post 20 Scenario 1), when corporate hierarchy restructures (Post 20 Scenario 2), when network memberships change (Post 20 Scenario 3) — every transaction with embedded relationships requires updating.

    All three structure types are Master Data. All three are mutable. All three get embedded in immutable transactions. That’s the architectural violation repeated across every structure type.

    Why Traditional Fixes Fail

    Post 11 showed three traditional fixes. Now you can see why they fail.

    Fix 1: Repost everything

    Update every transaction from Master Data v1.0 structure to Master Data v2.0 structure.

    Why it fails: You’re overwriting captured facts with new classifications. The transaction says it happened in “North Region Enterprise” but it actually happened in “North Region” (v1.0 didn’t have Enterprise/Mid-Market split). Historical truth lost. Query-time classification would preserve both: what structure existed when transaction happened, and how that transaction would classify under current structure.

    Fix 2: Mapping tables

    Map old structure to new structure. “North Region” → “North Region Enterprise” mapping.

    Why it fails: You’re mapping Master Data v1.0 to v2.0, but you still embedded structures in transactions. The mapping is relationship structure Master Data trying to correct embedded structure Master Data. Two structure types, both needing governance, both needing versioning, compounding complexity. Should have separated structures from transactions from the start.

    Fix 3: Keep both structures

    Add new field. Territory_v1, Territory_v2, Territory_v3 as structures change.

    Why it fails: You’re embedding multiple Master Data structure versions in same transaction. Schema changes every time management restructures. Bloat increases with each version. Reporting conditional logic explodes. Should have versioned structures separately, not added fields to transactions.

    All three fixes try to work around embedded structures. None fix the root problem: structures shouldn’t be in transactions at all.

    Territory Restructuring Revisited

    Let’s return to Post 11’s main example with framework language.

    What you embedded: Classification structure Master Data (“Regional Sales > North Region”)

    What you should reference: Customer Master Record ID (#12345)

    What should be separate: Territory structure Master Data, versioned independently

    How reporting should work: Join transaction (customer #12345) + Master Record properties (revenue, country as of date) + Territory structure version → territory assignment for that date

    When territory restructures: Version structure to v2.0. Transactions unchanged. Reporting with v2.0 reclassifies same transaction as “North Region Enterprise.” Both classifications available. No reposting.

    The same pattern applies to product reclassification (taxonomy changes), org restructures (hierarchy changes), and customer segmentation (classification rule changes). Different structure types. Same violation. Same fix. Separate structures from transactions.

    What Minimal Transactions Enable

    From Post 14: Master Records contain minimal intrinsic properties. No derived classifications. No calculated values. Just what’s true about the entity.

    Transactions follow the same principle. Minimal. Just what happened.

    Customer ID, Product ID, Amount, Date. Not customer properties. Not product properties. Not territory structures. Not taxonomy classifications. Just references to things and facts about the event.

    Why minimal? Because everything you embed creates dependencies.

    Embed territory: Dependent on territory structure version. When structure changes, transaction needs updating or mapping or dual-fielding.

    Embed customer properties: Dependent on property values at transaction time. Customer relocates, property changes, historical transaction has outdated property. Time-stamped Master Records handle this. Transactions just reference ID.

    Embed product classification: Dependent on taxonomy version. Taxonomy changes, classification changes, transaction becomes historically inaccurate. Versioned Master Data handles this. Transactions just reference ID.

    Minimal transactions have minimal dependencies. Maximal flexibility. When business changes, minimal transactions don’t need changing.

    The Three-Layer Pattern Completed

    Posts 11-12 introduced problem and solution. Territory restructure takes three months because structures are embedded. Three-layer pattern separates them.

    Post 13 explained Master Data. Structures that organize. Classification, operating standards, relationships. Strategic authority, mutable, versioned.

    Post 14 explained Master Records. Entities with minimal properties. Operational authority, time-stamped, stable IDs.

    Posts 19-20 showed relationship structures as Master Data. Cross-system mappings, party-to-location, corporate hierarchies, network memberships. All relationship structures. All Master Data. All separate from entities.

    Post 17 showed Contracts as bilateral agreements. What belongs in contracts versus what stays in structures.

    Now Post 21 completes the pattern. Transactions are immutable captured facts with minimal references. No embedded properties. No embedded structures. Just IDs pointing to Master Records.

    Three layers:

    • Transactions (minimal, immutable – just IDs and facts)
    • Master Records (entity properties, time-stamped)
    • Master Data (structures, versioned independently)

    Reporting joins them at query time. Transaction date + Master Record effective date + Master Data version = correct classification for that point in time.

    No reposting. No migration. No three-month projects. Routine structural change.

    Why Embedding Breaks Everything

    The title isn’t hyperbole.

    When you embed mutable structures in immutable facts, three things collapse:

    Strategic flexibility stops. Management can’t restructure without IT projects. Business decisions wait for technical implementation. Territory restructure? Three months. Taxonomy change? Migration project. Org reorganization? Mass data update. Every strategic shift becomes a technical obstacle. Competitors who can adapt quickly gain advantage while you’re stuck in implementation.

    Historical accuracy disappears. Reposting overwrites what actually happened. Mapping tables approximate relationships. Dual-fielding creates confusion about which version to trust. The sale that rolled up to “North Region” in 2023 now claims it was always “North Region Enterprise.” Query results don’t match what people remember. Original truth lost.

    Architecture ossifies. Every structure change requires transaction updates. Every property change requires field additions. Every new classification requires schema changes. The system becomes a cage instead of an enabler. Adding capabilities means touching immutable records. Evolution stops because change is too expensive.

    Everything breaks because you embedded mutable structures in immutable facts.

    What Separation Enables

    Separate structures from transactions, and architecture transforms:

    Strategic flexibility returns. Management restructures territories. Version the structure. Transactions unchanged. Reporting adapts. Days instead of months. New taxonomy? Version it. Org change? Version it. Business agility becomes routine, not exceptional.

    Historical accuracy preserves. Transaction captured what happened. Master Record properties time-stamped. Master Data structures versioned. Query any point in history with correct context. Compare across time using appropriate classifications. Users verify accuracy against their memory. Trust builds because data proves consistent.

    Architecture enables. Add structures without touching transactions. Change classifications without migration. Evolve organization without reprocessing. System adapts instead of constraining. New capabilities extend structures, not schemas. Growth becomes architectural, not disruptive.

    Everything works because immutable facts stay separate from mutable structures.

    The Test Returns

    From Post 11: Next time someone says “We need to restructure territories,” ask “How long will it take?”

    Now you can diagnose why:

    If answer is “three months” → structures embedded in transactions

    If answer is “need to repost historical data” → Master Data not versioned separately

    If answer is “need mapping tables” → trying to fix embedded structures instead of separating

    If answer is “can’t compare historical to current” → lost query-time composition capability

    If answer is “update the structure, version it, reporting adapts” → proper separation achieved

    One architecture requires three months. The other requires structure update. Same business decision. Completely different implementation cost.

    The difference is whether you embedded structures in transactions or separated them as Master Data.

    Understanding why embedding structures breaks everything explains why separation matters. Not just cleaner architecture. Strategic flexibility. Business agility. Competitive advantage.

    Architecture that enables rather than constrains.


This post is part of the Systems Thinking Series on how the 5 authority types impact system design. Read the full series at authorityrule.com/

  • The Party Problem: Seven Ship-To Locations

    One customer, seven locations. Management wants to restructure territories. How long will it take?

    If your architecture requires updating every location record, you’ve embedded structures in entities. If it versions a single structure, you’ve separated them correctly.

    Let’s test the Three-Layer framework under increasing complexity.

    Scenario 1: Basic Party-to-Location

    Acme Corporation has seven ship-to locations: Warsaw plant, Krakow distribution center, Berlin office, Prague warehouse, Budapest facility, Vienna office, London headquarters.

    The temptation: Store locations as properties on the customer record. Seven address fields. Or create a locations table with customer ID stamped on each row.

    The problem: When Acme opens an eighth facility, what changes? When they close Prague warehouse, what updates? Every change touches either the party record or requires updating stamped location records.

    Framework approach:

    Acme Corporation = Master Record. Minimal intrinsic properties: legal name, tax ID, registration date, entity type. No locations embedded.

    Each location = Master Record. Warsaw plant, Krakow distribution, Berlin office — each is an entity with its own minimal properties: address, delivery hours, loading dock capacity, operational status.

    Party-to-location assignments = Master Data. Relationship structure connecting party to locations:

```
Acme Corporation (Party)
├── Warsaw plant
├── Krakow distribution
├── Berlin office
├── Prague warehouse
├── Budapest facility
├── Vienna office
└── London headquarters
```

    This is Master Data from Post 13. Relationship structures. Maintained by operations. Versioned when changes occur.

    When Acme opens eighth facility: Create location Master Record for new facility. Add relationship in party-to-location structure. Party Master Record unchanged. Existing location Master Records unchanged. Just relationship structure updated.

    When Acme closes Prague warehouse: Update location Master Record status to inactive. Remove relationship in party-to-location structure (or version it with end date). Party Master Record unchanged. Other locations unchanged.

    When Acme restructures operations: Reassign locations to different operational roles. Relationship structure versions. Party and location Master Records stay unchanged unless actual properties like delivery hours change.

    The separation holds. Changes affect relationship structures. Master Records stay minimal.
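A sketch of what that relationship structure could look like as plain data, assuming versioned assignment rows with effective dates (the shapes and helper names are illustrative):

```python
from datetime import date

# Party-to-location assignments as Master Data: versioned relationship rows.
party_locations = [
    {"party": "Acme Corporation", "location": "Warsaw plant",
     "effective_from": date(2018, 1, 1), "effective_to": None},
    {"party": "Acme Corporation", "location": "Prague warehouse",
     "effective_from": date(2019, 5, 1), "effective_to": None},
    # ...the other five locations would follow the same shape
]

def open_location(party, location, on):
    """Opening a facility adds a relationship row; Master Records stay untouched."""
    party_locations.append({"party": party, "location": location,
                            "effective_from": on, "effective_to": None})

def close_location(party, location, on):
    """Closing a facility end-dates the relationship instead of deleting history."""
    for row in party_locations:
        if (row["party"] == party and row["location"] == location
                and row["effective_to"] is None):
            row["effective_to"] = on

def locations_for(party, on):
    """Query-time view: which locations were assigned to the party on a date."""
    return [r["location"] for r in party_locations
            if r["party"] == party and r["effective_from"] <= on
            and (r["effective_to"] is None or on <= r["effective_to"])]

open_location("Acme Corporation", "Eighth facility", date(2026, 3, 1))
close_location("Acme Corporation", "Prague warehouse", date(2026, 6, 30))
print(locations_for("Acme Corporation", date(2026, 9, 1)))  # Warsaw + eighth facility
```

Opening and closing facilities only touch the relationship rows; party and location Master Records never change.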

    Scenario 2: Corporate Hierarchy

    Acme Corporation isn’t standalone. It’s a subsidiary of Acme International Holdings, which is owned by Global Industrial Corp.

    Three-level corporate hierarchy. How do you trace ownership chains? How do you handle reorganizations?

    The temptation: Add parent company ID to party Master Record. Acme record includes “Parent: Acme International Holdings” as property.

    The problem: When Global Industrial restructures — sells division, acquires competitor, merges subsidiaries — every party Master Record in the affected hierarchy requires updates. Parent IDs change. Historical reporting breaks unless you’ve time-stamped every ownership change at the party level.

    Framework approach:

    Each entity = Master Record. Acme Corporation, Acme International Holdings, Global Industrial Corp — all party Master Records with minimal properties. No parent IDs embedded.

    Corporate hierarchy = Master Data. Relationship structure defining ownership:

```
Global Industrial Corp
└── Acme International Holdings
    └── Acme Corporation
        └── (7 ship-to locations from Scenario 1)
```

This is relationship structure Master Data: hierarchical in shape, but mapping ownership relationships between entities. Versioned independently.

    When Global Industrial acquires competitor: Add competitor parties as Master Records. Update corporate hierarchy structure to show new ownership relationships. Version the structure. Historical reporting uses historical hierarchy version. Current reporting uses current version. Existing party Master Records unchanged.

    When Acme International Holdings is sold: Update corporate hierarchy structure. Acme International (and all subsidiaries) move to new parent. Version the structure with effective date. Historical transactions use historical hierarchy. Current transactions use current hierarchy. Party Master Records unchanged.

    The pattern holds. Relationship structures handle hierarchies. Master Records stay minimal. Changes version structures, not entities.
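One possible representation, sketched as versioned parent-child edges. The "New Owner Group" buyer and all dates are hypothetical, used only to show that a sale versions the structure rather than the party records:

```python
from datetime import date

# Corporate hierarchy as Master Data: versioned ownership edges.
ownership = [
    {"child": "Acme Corporation", "parent": "Acme International Holdings",
     "effective_from": date(2015, 1, 1), "effective_to": None},
    {"child": "Acme International Holdings", "parent": "Global Industrial Corp",
     "effective_from": date(2015, 1, 1), "effective_to": date(2025, 12, 31)},
    # The sale only adds a new edge; party Master Records are untouched.
    {"child": "Acme International Holdings", "parent": "New Owner Group",
     "effective_from": date(2026, 1, 1), "effective_to": None},
]

def parent_of(entity, on):
    for edge in ownership:
        if (edge["child"] == entity and edge["effective_from"] <= on
                and (edge["effective_to"] is None or on <= edge["effective_to"])):
            return edge["parent"]
    return None

def ownership_chain(entity, on):
    """Walk the hierarchy version that was effective on the given date."""
    chain = [entity]
    while (parent := parent_of(chain[-1], on)) is not None:
        chain.append(parent)
    return chain

print(ownership_chain("Acme Corporation", date(2025, 6, 1)))  # ends at Global Industrial Corp
print(ownership_chain("Acme Corporation", date(2026, 6, 1)))  # ends at New Owner Group
```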

    Scenario 3: Network Membership

Acme Corporation is a member of the European Manufacturing Consortium. The consortium negotiated supplier agreements providing a 15% volume discount with preferred vendors.

    How do you know Acme gets the discount? How do you apply it to transactions? How do you handle when Acme joins or leaves networks?

    The temptation: Add discount percentage to Acme’s Master Record. Or add “Network: EMC” as property with associated discount terms.

    The problem: When consortium renegotiates terms — 15% becomes 18% — you update every member party record. When party leaves consortium, you remove discount from their record. Party Master Records become repositories for contractual terms that belong elsewhere.

    Framework approach:

    European Manufacturing Consortium = Master Record. The network itself is an entity with minimal properties: name, registration details, contract terms. The discount terms (15% volume discount) are intrinsic properties of the consortium entity.

    Network membership = Master Data. Relationship structure connecting parties to networks:

```
European Manufacturing Consortium (Master Record: 15% discount)
├── Acme Corporation (member)
├── TechMfg Industries (member)
├── Alpine Manufacturing (member)
└── Nordic Production Group (member)
```

    Parties don’t have discounts stamped on them. Networks have discount terms in their Master Records. Membership connects parties to networks through relationship structures.

    How does transaction get the discount?

    Transaction ships product to Acme Corporation. Query process:

    1. Check party-to-network mapping (Master Data) → Acme is member of EMC
    2. Query EMC Master Record → discount terms = 15%
    3. Apply discount to transaction pricing

    Query-time composition. Not pre-stamped discount on party record.
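Here is a rough sketch of that three-step composition, assuming simple in-memory structures. The 15% and 18% figures follow the scenario; everything else (names, dates, shapes) is illustrative:

```python
from datetime import date

# Network Master Record: discount terms are time-stamped properties of the EMC.
emc_terms = [
    {"discount": 0.15, "effective_from": date(2023, 1, 1),
     "effective_to": date(2025, 12, 31)},
    {"discount": 0.18, "effective_from": date(2026, 1, 1), "effective_to": None},
]

# Network membership as Master Data: relationship rows, versioned with dates.
memberships = [
    {"party": "Acme Corporation", "network": "EMC",
     "effective_from": date(2023, 1, 1), "effective_to": None},
]

def covers(row, on):
    return (row["effective_from"] <= on
            and (row["effective_to"] is None or on <= row["effective_to"]))

def discount_for(party, on):
    """1) check membership, 2) read the network's terms, 3) return rate to apply."""
    is_member = any(m["party"] == party and m["network"] == "EMC" and covers(m, on)
                    for m in memberships)
    if not is_member:
        return 0.0
    return next(t["discount"] for t in emc_terms if covers(t, on))

print(discount_for("Acme Corporation", date(2025, 6, 1)))  # 0.15
print(discount_for("Acme Corporation", date(2026, 6, 1)))  # 0.18 after renegotiation
print(discount_for("Hypothetical Non-Member Ltd", date(2026, 6, 1)))  # 0.0
```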

    When consortium renegotiates discount: Update EMC Master Record. 15% becomes 18%. Time-stamp the change. Party Master Records unchanged. Membership structure unchanged. All members automatically get new discount for new transactions. Historical transactions used historical discount percentage from EMC Master Record effective date.

    When Acme leaves consortium: Update membership structure. Remove Acme from EMC membership. Version with end date. Party Master Record unchanged. EMC Master Record unchanged. Future transactions query membership, find no match, no discount applies.

    The framework holds. Network terms live in network Master Records. Membership is relationship structure. Compose at query time. No party records bloated with contractual terms.

    Scenario 4: Territory Restructuring

    Management restructures sales territories. Currently organized by country offices. Changing to regional model.

    Current structure: Six country offices (Poland, Germany, Czech, Hungary, Austria, UK) each managing their local Acme locations.

    New structure: Two regional offices (Western Europe, Eastern Europe) managing consolidated territories.

    Six country offices become two regional offices. Territory restructure. This is the Post 11 problem applied to parties.

    The embedded approach:

    Sales office assignment is a property on location Master Records. Update every affected location:

    • Warsaw Master Record: Change “Sales Office: Poland” to “Sales Office: Eastern Europe”
    • Prague Master Record: Change “Sales Office: Czech” to “Sales Office: Eastern Europe”
    • Berlin Master Record: Change “Sales Office: Germany” to “Sales Office: Western Europe”
    • Vienna Master Record: Change “Sales Office: Austria” to “Sales Office: Western Europe”

    Every location in Europe requires Master Record update. Historical reporting breaks unless you’ve time-stamped every sales office change at the location level.

    This is exactly the problem Post 11 described. Territory restructure requires reprocessing Master Records.

    Framework approach:

    Location Master Records contain minimal properties: address, delivery hours, operational capacity. No sales office assignment embedded.

    Sales territory assignment = Master Data. Classification structure organizing locations by sales responsibility:

```
Sales Territory Structure v1.0 (through 2025-12-31):
├── Poland Sales Office
│   ├── Warsaw plant
│   └── Krakow distribution
├── Germany Sales Office
│   └── Berlin office
├── Czech Sales Office
│   └── Prague warehouse
├── Hungary Sales Office
│   └── Budapest facility
├── Austria Sales Office
│   └── Vienna office
└── UK Sales Office
    └── London headquarters

Sales Territory Structure v2.0 (from 2026-01-01):
├── Western Europe Sales Office
│   ├── Berlin office
│   ├── Vienna office
│   └── London headquarters
└── Eastern Europe Sales Office
    ├── Warsaw plant
    ├── Krakow distribution
    ├── Prague warehouse
    └── Budapest facility
```

    Sales office assignment is Master Data. It’s a classification structure organizing locations by operational responsibility. Management controls it. Changes when strategy changes.

    When management restructures: Create new structure version. Effective date 2026-01-01. New regional model.

    What changes: One structure.

    What stays stable: All location Master Records, party Master Records, party-to-location relationships, corporate hierarchies, network memberships, transactions.

    Historical reporting: “Show me sales performance by office for 2025.”

    Query uses Sales Territory Structure v1.0. Warsaw rolls up to Poland office. Berlin rolls up to Germany office. Historical structure preserved.

    Current reporting: “Show me sales performance by office for 2026.”

    Query uses Sales Territory Structure v2.0. Warsaw rolls up to Eastern Europe office. Berlin rolls up to Western Europe office. Current structure applies.

    Trending across change: “Show me Warsaw plant performance 2025 vs 2026 by responsible sales office.”

    • 2025: Query with v1.0 structure → Poland office
    • 2026: Query with v2.0 structure → Eastern Europe office

    Both correct for their time periods. No location Master Records changed. Structure versioned.
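As a sketch, the two structure versions can be held as plain assignment tables and selected by date. Only two locations are shown and the shapes are illustrative:

```python
from datetime import date

# Sales territory structures as versioned Master Data (two locations shown).
territory_structures = [
    {"version": "v1.0", "effective_from": date(2020, 1, 1),
     "effective_to": date(2025, 12, 31),
     "assignments": {"Warsaw plant": "Poland Sales Office",
                     "Berlin office": "Germany Sales Office"}},
    {"version": "v2.0", "effective_from": date(2026, 1, 1), "effective_to": None,
     "assignments": {"Warsaw plant": "Eastern Europe Sales Office",
                     "Berlin office": "Western Europe Sales Office"}},
]

def sales_office(location, on):
    """Roll a location up to its sales office using the structure effective on `on`."""
    for structure in territory_structures:
        if (structure["effective_from"] <= on
                and (structure["effective_to"] is None
                     or on <= structure["effective_to"])):
            return structure["assignments"][location]
    raise LookupError("no structure version covers that date")

print(sales_office("Warsaw plant", date(2025, 7, 1)))  # Poland Sales Office
print(sales_office("Warsaw plant", date(2026, 7, 1)))  # Eastern Europe Sales Office
```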

    This applies the same principle from Post 11 to a different layer. Territory restructures version Master Data structures. Master Records stay unchanged. Transactions stay unchanged. Reporting adapts using version dates.

    If you embedded sales office in location Master Records: Three-month project to update every location record. Migration scripts. Data validation. Testing across all systems consuming location data. Reporting breaks if you don’t update historical assignments.

    Embedded structures create the restructuring nightmare Post 11 described. Separated structures enable routine strategic changes.

    What The Scenarios Reveal

    Four scenarios. Increasing complexity. Multiple relationship types. Real-world changes.

    The framework holds because:

    Master Records stay minimal. Parties, locations, networks, corporate entities — all contain only intrinsic properties. No embedded relationships. No stamped structures.

    Relationship structures handle connections. Party-to-location, corporate hierarchy, network membership, territory assignment — each is separate Master Data. Each versions independently. Each governs independently.

    Query-time composition assembles views. Reporting joins transactions, Master Records, and relationship structures using dates and versions. No pre-consolidated data. No forced migration.

    Changes affect structures, not entities. Open location, update structure. Restructure hierarchy, update structure. Change membership, update structure. Reassign territories, update structure. Master Records unchanged. Transactions unchanged.

    What breaks if you embed:

    Embed locations in party record: Opening location changes party. Closing location changes party. Every location change touches party Master Record.

    Embed hierarchy in party record: Acquisition changes every subsidiary party. Restructuring changes every affected party. Corporate changes cascade through Master Records.

    Embed discount in party record: Renegotiate terms, update every member party. Leave network, update party. Join network, update party. Contractual changes require mass party updates.

    Embed territory in location record: Restructure territories, update every location. Strategic changes become migration projects.

    Embedded structures create cascading changes. Separated structures enable isolated updates.

    The pattern is consistent across all four scenarios: separate by authority, version independently, compose at query time. Master Records hold operational authority over entity properties. Master Data holds strategic authority over structures. When structures change, Master Records stay stable.

    This isn’t just cleaner data models. It’s architecture that scales with complexity instead of breaking under it. Each new relationship type is a new structure. Each strategic change is a structure version. No migration. No reprocessing. No cascading updates.

The Three-Layer framework from Post 11 separates immutable transactions from mutable structures. The Master Data / Master Records separation prevents embedding mutable structures in entity properties. Same principle. Different application. Consistent framework.

    Understanding relationship structures as Master Data changes what’s architecturally possible. Not just theory. Strategic flexibility through proper separation.


This post is part of the Systems Thinking Series on how the 5 authority types impact system design. Read the full series at authorityrule.com/

  • When One Entity Lives in Many Systems

    One entity, multiple systems — that’s the reality of every large enterprise.

    Customer #12345 exists in CRM, again in ERP, again in Billing. The immediate instinct is to fix it. One source of truth, says the playbook. Master Data Management will unify everything.

    I’m questioning that premise.

    Not because I’ve watched Master Data Management projects fail — although I have, repeatedly — but because I’ve watched them stall before they ever began. Teams spend months defining the business value, mapping attributes across systems, debating ownership and governance, and designing the theoretical “golden record.” Then nothing ships.

    Maybe the problem isn’t execution. Maybe the premise itself is wrong — especially for enterprises that already have functioning systems.

    Why One Entity Lives in Many Systems

    This isn’t duplication. It’s not “the same data copied everywhere,” and it’s not a quality issue waiting to be resolved. It’s specialisation.

    Different systems hold different slices of the same reality because they serve different operational purposes. CRM manages customer relationships and sales opportunities. ERP manages financial transactions. Billing manages invoices and payments. Each needs different data, different workflows, and different operational contexts.

    CRM is designed for sales performance, ERP for financial integrity, Billing for payment processing. They overlap because the real world overlaps — but that overlap isn’t an error. It’s a feature of focus.

    The Golden Record Promise

    Traditional Master Data Management promises to fix this through consolidation.

    Create a single master record — the Golden Record — that every system references. One authoritative version of the truth. One comprehensive customer profile.

    The promise is seductive: instead of scattered customer data, a single authoritative source. Instead of synchronisation problems, guaranteed consistency. Systems point to one shared entity, and inconsistency disappears.

    In practice, it rarely works that way.

    To make it happen, every existing system has to stop using its own customer table and start using the new master table. CRM must change every query, workflow, and integration point. ERP must do the same. Billing too. What sounds like a simple reference change is actually a fundamental rewrite of each system’s data model — while those systems are still running live business operations.

    The design phase can last months. The migration phase is often never completed. The risk is too high, the disruption too great, and the promised benefit too abstract to justify the upheaval.

    Maybe the problem isn’t that teams can’t execute MDM. Maybe the Golden Record itself — the idea of collapsing specialisation into one master object — is the wrong architecture for brownfield systems.

    Relationship Structures as Master Data

    The Three-Layer Architecture framework reveals a different approach: the mappings between systems are relationship structures — the third type of Master Data from Post 13. Not a workaround or an integration patch, but the structural layer that connects specialised systems into a coherent whole.

    In most enterprises, CRM holds Customer #12345 in Salesforce for sales operations, ERP holds the same entity as #98765 in SAP for financial management, and Billing keeps its own version, #45678, for invoicing. Support and Order Management have theirs as well. Each record represents the same organisation — Acme Corporation — but in a different operational context.

    Each system maintains its own Master Record, containing only the intrinsic properties it truly needs to function. CRM tracks legal name, tax ID, registration date, and primary contact. ERP maintains the corporate identifiers required for accounting and compliance. Billing stores delivery and invoice preferences. Nothing duplicated, nothing redundant, nothing derived — each record remains minimal by design.

    The connective tissue is the relationship structure itself:

```
Real-world entity: Acme Corporation
├── CRM: CRM-12345
├── ERP: ERP-98765
├── Billing: BILL-45678
├── Support: SUP-11223
└── Order Management: OM-99887
```

    These mappings — the explicit relationships that say these identifiers all refer to the same entity — are relationship structures, the third Master Data type. Governed and versioned like classification structures and operating standards, but connecting entities instead of organizing them.
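As data, these mappings can be nothing more than governed, effective-dated relationship rows. A minimal sketch, assuming hypothetical row shapes and a lookup helper:

```python
from datetime import date

# Cross-system identifier mappings as Master Data: governed, versioned rows.
entity_mappings = [
    {"entity": "Acme Corporation", "system": "CRM", "local_id": "CRM-12345",
     "effective_from": date(2018, 1, 1), "effective_to": None},
    {"entity": "Acme Corporation", "system": "ERP", "local_id": "ERP-98765",
     "effective_from": date(2018, 1, 1), "effective_to": None},
    {"entity": "Acme Corporation", "system": "Billing", "local_id": "BILL-45678",
     "effective_from": date(2018, 1, 1), "effective_to": None},
]

def local_id(entity, system, on):
    """Resolve which identifier a system used for the entity on a given date."""
    for m in entity_mappings:
        if (m["entity"] == entity and m["system"] == system
                and m["effective_from"] <= on
                and (m["effective_to"] is None or on <= m["effective_to"])):
            return m["local_id"]
    return None

# Replacing the billing platform would end-date one row and add another;
# CRM, ERP, and the transaction history would not change.
print(local_id("Acme Corporation", "Billing", date(2024, 3, 1)))  # BILL-45678
```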

    Most organisations treat this mapping data as a nuisance: integration scaffolding to be replaced once the mythical Golden Record arrives. That view misses the point. Cross-system mappings are not a symptom of fragmentation; they are a reflection of system specialisation.

    They express how reality looks inside a mature enterprise — multiple domains, each optimised for its own purpose, connected through governed relationships rather than collapsed into a single model. When recognised as Master Data, these relationship structures stop being transient and start being architectural assets.

    Architecture, Tools, Operations

    The responsibility for relationship structure data divides cleanly across three functions.

    Architecture defines what must be mapped: customer identifiers between CRM and ERP, party-to-location relationships, or product mappings across order management and billing.

    Development provides the tools — small, focused applications that maintain these relationship structures with standard interfaces and full version history.

    Operations owns the ongoing data maintenance, ensuring that when a new customer appears in ERP or a system is replaced, the relationships are updated and versioned correctly.

    This is Master Data as designed structure, not as a centralised database. The emphasis shifts from consolidation to composition — from trying to eliminate boundaries to managing them transparently.

    A customer-to-system mapping tool does one thing. A party-to-location mapping tool does another. Product taxonomies, sales hierarchies, regulatory codes — each lives in its own governed module. When cross-system reporting is required, the join happens dynamically. Data remains in its native systems; the relationship structures tell the query where to go. The result is coherent, without the cost and fragility of continuous synchronisation.

    When someone asks, “Show me all support tickets for customers with revenue greater than £1 million,” no single Golden Record is queried. The system composes the answer. CRM identifies strategic customers. Relationship structure data locates their equivalents in the Support system. Revenue is calculated from transactions. The join occurs at query time — clean, auditable, current. Nothing is duplicated, and no nightly batch tries to keep everything in sync. Truth emerges from structure, not consolidation.
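A sketch of that composition, with three stand-in data sets for CRM, transactions, and Support joined through the mapping at query time. Every identifier and number here is illustrative:

```python
# CRM Master Records (minimal properties, CRM's own identifiers).
crm_customers = [{"crm_id": "CRM-12345", "name": "Acme Corporation"},
                 {"crm_id": "CRM-20001", "name": "Smallco Ltd"}]

# Immutable transactions reference CRM customer IDs; revenue is derived from them.
transactions = [{"crm_id": "CRM-12345", "amount": 800_000},
                {"crm_id": "CRM-12345", "amount": 400_000},
                {"crm_id": "CRM-20001", "amount": 50_000}]

# Relationship structure: CRM identifier to Support identifier.
crm_to_support = {"CRM-12345": "SUP-11223", "CRM-20001": "SUP-33445"}

# The Support system keeps its own records, keyed by its own identifiers.
support_tickets = [{"support_id": "SUP-11223", "ticket": "Login failure"},
                   {"support_id": "SUP-11223", "ticket": "Invoice query"},
                   {"support_id": "SUP-33445", "ticket": "Password reset"}]

def revenue(crm_id):
    return sum(t["amount"] for t in transactions if t["crm_id"] == crm_id)

# "Show me all support tickets for customers with revenue greater than £1 million."
high_value = [c["crm_id"] for c in crm_customers if revenue(c["crm_id"]) > 1_000_000]
wanted_ids = {crm_to_support[crm_id] for crm_id in high_value}
print([t["ticket"] for t in support_tickets if t["support_id"] in wanted_ids])
```

Data stays in its native structures; the relationship mapping tells the query where to look.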

    Governance Requirements

    For this model to work, relationship structure data must be treated with the same discipline as any other Master Data.

    Ownership sits with enterprise data governance, not individual system teams. Every relationship carries an effective date and version history. Historical states remain queryable, so reporting always aligns with the truth at that point in time.

    System teams remain accountable for the accuracy of their own Master Records. Governance is accountable for the accuracy and integrity of the relationship structures that connect them. That separation keeps systems autonomous while preserving enterprise traceability.

    If relationship structures are governed Master Data, integration should stop being a migration exercise. Adding a new system becomes a matter of adding relationship structure data. Replacing an existing platform means updating relationships, not rebuilding the enterprise. Even mergers become manageable: relationship alignment replaces wholesale consolidation. Truth becomes temporal and relational, not monolithic. Architecture enables composition rather than control. Systems stay focused. Data stays coherent. And the Golden Record quietly becomes unnecessary.

    Connection to the Three-Layer Pattern

    This entire model fits naturally within the Three-Layer Architecture Pattern — the framework that separates Transactions, Master Records, and Master Data.

    Transactions are immutable events — sales, invoices, payments — always tied to specific system identifiers. Master Records define the entities themselves, with time-stamped properties that evolve over time. Master Data contains the mutable structures — classifications, operating standards, and relationship structures — that give those entities context and connect them across systems.

    When all three layers work together, truth becomes traceable through time.

    To understand what was true at any given point, you only need three coordinates:

```
Transaction date
+ Master Record effective date
+ Master Data structure version
= Truth at Time T
```

    This formula replaces the brittle idea of a single, universal “source of truth” with a verifiable chain of evidence.

    Each layer evolves independently but remains auditable through versioning.

    You can change a classification, replace a system, or restructure an organisation without rewriting history or replatforming your data.

    That’s the architectural breakthrough the framework reveals: separating how the business operates from how its truth is constructed.

    The Three-Layer Architecture framework reveals relationship structures as the missing governance layer. Not a workaround for MDM. Not a stepping stone to Golden Records. The actual architectural pattern for brownfield enterprises.

    What This Should Enable

    The framework suggests that once relationship structures are recognised as Master Data, architectural options should expand dramatically.

    Cross-system reporting no longer requires consolidation projects. You define the relationship structure requirements, build or configure mapping tools, and maintain the data. Query-time joins handle composition dynamically. Adding a new system no longer means rebuilding interfaces or forcing migrations — you simply extend the relationship structures.

    Replacing a legacy platform becomes incremental rather than catastrophic. You update relationship structures, maintain historical versions, and preserve continuity. Mergers or acquisitions stop being years-long data integration nightmares; instead, you link ecosystems through governed relationships. Both organisations keep operating on their existing systems while shared reporting emerges through mapped relationships.

    This is the pragmatic architecture the Three-Layer framework proposes for enterprises that already run on specialised systems and can’t afford the risk or downtime of a full rebuild. It turns “integration” from a transformation programme into a living structure of governed relationships.

    The difference is profound.

    Traditional Master Data Management tries to fix inconsistency through consolidation — a data operation that requires every system to change.

    This approach resolves inconsistency through design — an architectural separation that allows systems to remain independent while still aligning at the point of truth.

    It’s not a smaller version of MDM. It’s a different pattern entirely.

    One where data remains where it belongs, and relationships — not records — form the backbone of enterprise coherence.


This post is part of the Systems Thinking Series on how the 5 authority types impact system design. Read the full series at authorityrule.com/

  • Master Records: The Entities Themselves

Last post established the distinction: Master Data = structures, Master Records = entities, Transactions = events. Let’s now dive deeper into Master Records and how we could architect them differently.

    What Master Records Are

    Master Records are specific entities. Individual things that exist in your business.

    Customers:

    • Customer #12345 (Acme Corporation)
    • Customer #67890 (TechCo Inc)

    Products:

    • Product SKU-789 (Business Laptop Model X)
    • Product SKU-456 (Enterprise Server Model Y)

    Locations:

    • Ship-to Location SL-10001 (Acme Warsaw Plant)
    • Ship-to Location SL-10002 (Acme Krakow Plant)

    Employees:

    • Employee S-456 (Jane Smith, salesperson)
    • Employee S-789 (John Chen, account manager)

    Each one is a thing. An entity. A specific instance. Not a category. Not a structure. An actual entity with properties and a state. This matters because entities have different architectural requirements than structures.

    Master Records Have Properties

    Customer #12345 isn’t just an ID. It’s an entity with characteristics:

```
Customer #12345: Acme Corporation
- Legal Name: Acme Corporation Ltd
- Tax ID: UK123456789
- Registration Date: 2010-03-15
- Primary Address: London, UK
- Status: Active (i.e. its state)
- Contact Email: info@acme.com
```

    These are properties of THIS customer. They describe THIS entity. Not customers in general. This one.

    Properties change over time. Address changes when they relocate. Status changes when the relationship changes. That’s why Master Records need time-stamping:

```
Customer #12345 properties:

2020-01-01 to 2023-12-31:
- Address: Manchester, UK
- Status: Active

2024-01-01 onwards:
- Address: London, UK
- Status: Active
```

    Properties evolve. Time stamps preserve what was true when.
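A minimal sketch of a time-stamped property and an "as of" lookup, assuming effective-dated rows (the shape is illustrative, not a prescribed schema):

```python
from datetime import date

# Property history for Customer #12345: each row carries an effective range.
address_history = [
    {"address": "Manchester, UK",
     "effective_from": date(2020, 1, 1), "effective_to": date(2023, 12, 31)},
    {"address": "London, UK",
     "effective_from": date(2024, 1, 1), "effective_to": None},
]

def address_as_of(on):
    """What did we know about this customer's address on a given date?"""
    for row in address_history:
        if (row["effective_from"] <= on
                and (row["effective_to"] is None or on <= row["effective_to"])):
            return row["address"]
    return None

print(address_as_of(date(2023, 6, 15)))  # Manchester, UK
print(address_as_of(date(2025, 6, 15)))  # London, UK
```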

    This is different from Master Data versioning. Master Records track property changes. Master Data tracks structural reorganizations.

    Master Records You Create

    Not all Master Records are discovered in the world. Some you define and create yourself.

    Products you specified:

    • SKU-789: 16GB RAM, 512GB SSD, Intel i7, 14″ screen
    • SKU-456: 64GB RAM, 2TB SSD, Dual Xeon, Rack-mount

Locations you designated (or the ones you outsourced production to):

    • SL-10001: Berlin plant, delivery hours 8am-6pm, loading dock capacity 5 trucks
    • SL-10002: Munich plant, 24-hour operations, rail access

    You decided these entities exist. You defined their properties. You control when they change.

    You create them. You operate with them. You govern their properties.

    But you don’t organize them – that’s what Master Data structures do.

    The Separation That Changes Everything

    Here’s what the Master Data / Master Records distinction actually means in practice:

    Master Records = Entities:

    • Customer #12345
    • Properties: Name, address, tax ID
    • Changes: When entity facts change (moved, renamed)
    • Governance: Operational (sales ops, customer service)
    • Authority pattern: Operational authority over properties

    Master Data = Structures organizing entities:

    • Territory hierarchy (Strategic Accounts > North Region > Enterprise Sales)
    • Classification rules (Revenue > $1M = Strategic Account)
    • Changes: When strategy changes (territory reorganization)
    • Governance: Strategic (management, architecture)
    • Authority pattern: Strategic authority over organization

    Customer #12345 is a Master Record. The territory structure that classifies Customer #12345 as “Strategic Account in North Region” is Master Data.

    The entity exists independently of how you classify it.

    That independence is architectural. You can restructure territories without touching customer records. You can change classification rules without reprocessing entities.

    The structures are mutable. The entity facts are what they are.

    Why Traditional MDM Fails Here

    Traditional MDM conflates these. “Master Data Management” becomes “manage all the important data” without separating entities from their organizing structures.

    Result: Territory restructuring requires reprocessing customer records. Taxonomy changes require updating product definitions. Org chart changes require modifying employee records.

    Three months to change a classification structure.

    Not because the decision is hard. Because you embedded mutable structures (Master Data) in stable entities (Master Records).

    The three-layer architecture prevents this:

    Transactions reference Master Records by ID. Just the ID.

    Master Records contain entity properties. Time-stamped so you know what was true when.

    Master Data contains classification structures. Versioned independently so management can restructure without touching operational data.

    Reporting joins them at query time. Transaction date + Master Record effective date + Master Data version = correct classification for that point in time.

    No reprocessing. No propagation. Query-time assembly.

    Master Record Principles

    Understanding Master Records as entities leads to specific architectural principles.

    These seem to hold across insurance, manufacturing, housing – 12+ years building business architecture:

    Principle 1: Minimal necessary properties

    Store intrinsic entity facts – not derived classifications, not calculated values. Just what’s true about this entity.

    Customer #12345 has legal name (Acme Corporation Ltd) and tax ID (UK123456789), but territory assignment belongs in Master Data structures, and customer lifetime value belongs in analytics.

    Why minimal? Because every property in a Master Record requires operational governance. More properties means more governance overhead. Master Data handles classification. Analytics handles calculation. Master Records handle entity facts.

    Principle 2: Time-stamped changes

    Properties change over time, so preserve history with effective dates. No “current state only” architecture. You’ll need historical reporting, audit trails, answers to “what did we know about this customer on June 15?”

    Time-stamp from the start because retrofitting temporal data is painful. Adding effective dates after the fact breaks everything.

    Principle 3: Single source per property type

    Don’t duplicate entity facts across systems. Pick authoritative source for each property type. Customer legal name comes from CRM. Customer shipping address comes from Order Management.

    The Master Record exists across systems, but each property has single authority. This is different from “golden record” approaches that try to create one master copy. The entity exists in multiple systems. Who has authority over specific properties is what matters.

    Principle 4: Stable identifiers

    IDs don’t change, ever. Customer #12345 stays #12345 forever, even if company name changes, even if acquired, even if relocated.

    Everything else in the Master Record can change. The ID never changes.

    Why? Because Transactions reference Master Records by ID. If IDs change, you break transaction history. Stable identifiers enable everything else to be mutable.

The ID stays #12345 across every property change:

```
Customer #12345 properties (as of 2025-10-28):

2020-01-01 to 2023-12-31:
  Address: Manchester, UK
  Status: Inactive

2024-01-01 onwards:
  Address: London, UK
  Status: Active
```

A Deep Dive into Cross-System Master Records

    In real enterprises, Master Records live in multiple systems.

    CRM has customers. ERP has customers. Billing has customers.

    Not duplicates. Not golden records. Different systems, different purposes, same real-world entity.

    Understanding Master Records as entities changes how you handle this.

    You don’t try to consolidate. You don’t create golden records. You map across systems.

    Cross-system mapping structure (Master Data):

```
Entity Mapping:
Real-world: Acme Corporation
├── CRM: CRM-12345
├── ERP: ERP-98765
└── Billing: BILL-45678
```

    The Master Records exist in their systems, with the properties each system needs. The mapping structure (Master Data) connects them.

    This is why the Master Data / Master Records distinction matters architecturally.

    Master Records = entities in operational systems, governed operationally. Master Data = structures that organize/map entities, governed strategically.

    No golden record needed. Just well-governed mappings.

    The Minimal Principle Revisited

    Master Records should contain minimum necessary properties.

    Not because minimalism is virtuous. Because governance overhead scales with property count.

You include identity (ID, legal name), intrinsic facts (tax ID, registration date, serial number), contact and admin details (address, email, phone), and status (active, inactive, suspended). You don’t include derived classifications like territory, segment, revenue category, and so on – those belong in Master Data. You don’t include structural relationships like team membership, hierarchies, or networks – those belong in Master Data. You don’t include calculated values like credit score or lifetime value – those belong in the analytics layer.

    Each property you add requires operational governance (who updates, when, how), creates dependencies (who consumes this property), and adds to change management overhead (what happens when it changes). Derive classifications from properties plus structures at reporting time. Keep Master Records minimal. Push complexity to Master Data and reporting.

    Implementation Sequence

The Master Data / Master Records distinction seems to change the implementation sequence.

    You can’t version Master Data structures until you stabilize Master Records.

    Start with Master Records. Identify your core entities – customers, products, employees, locations. Define intrinsic properties, asking what describes THIS entity, not how you classify it. Implement time-stamping to preserve property history.

    Then analyse what should be Master Data. Pull out structures like territories, taxonomies, and org charts. Version these structures separately from entities. Create classification rules. Implement strategic governance where management controls structures.

    Then remove the embedded structures from the Master Records – no territory assignments, no hierarchies. Establish operational governance around who updates what and when.

    Lastly, join everything at reporting time where properties plus structures equals classifications.

    Master Records are the foundation. Master Data structures organize them. Build a stable foundation first.

    To my understanding, this is the opposite of traditional MDM, which tries to build golden records (entities plus classifications) first. Understanding the distinction changes the implementation sequence.

    What This Changes

    Get Master Records right – stable entities with time-stamped properties – and Master Data becomes manageable.

    Get Master Records wrong – embedded structures, missing time-stamps, bloated properties – and Master Data governance becomes impossible.

    The distinction isn’t semantic. It’s architectural.

    Master Records = Stable entities with time-stamped properties

    • Operational governance
    • Property-level authority
    • Changes when entity facts change

    Master Data = Mutable structures organizing entities

    • Strategic governance
    • Structure-level authority
    • Changes when strategy changes

    Transactions = Minimal references to Master Records

    • Just IDs
    • Immutable events
    • No embedded properties

    Reporting = Query-time assembly

    • Join Transaction + Master Record + Master Data
    • Use effective dates and version numbers
    • Reconstruct classification at any point in time

    Clean separation. Clear governance. Operational stability. Strategic flexibility.

That’s what the Authority Rule gives you with regard to Master Records as entities.

    Not just “entities to manage.” Foundation for architecture that scales.


This post is part of the Systems Thinking Series on how the 5 authority types impact system design. Read the full series at authorityrule.com/

  • Master Data: The Layer That Changes

    Three layers solve the restructuring problem. Transactions (immutable facts), Master Records (entity properties), Master Data (classification structures).

    We’ve covered why embedding structures in transactions creates architectural and business challenges. As I explained in The Three-Layer Architecture Pattern, separating these layers enables strategic flexibility.

Now: Master Data specifically. Because this is the layer that the Authority Rule changes the most. Reference Data is not Master Data – that’s easy to understand. Master Data as structure can be a bit harder to get your head around.

    What Master Data Actually Is

    Master Data is structures, not entities.

    Not individual customers. Not specific products. Not particular locations, those are Master Records. Instead, Master Data is the organizational framework you use to classify and organize those entities.

    Classification structures: Territory hierarchies that organize customers by strategic importance and geographic coverage. Product taxonomies that group items by function, market, or business line. Org structures that define teams, divisions, and reporting relationships. Customer segmentation models that classify accounts by revenue potential, service requirements, or strategic value.

    Relationship structures: Party-to-location assignments that map which customers use which shipping addresses. Cross-system mappings that connect CRM-12345 = ERP-98765 = same real-world entity. Corporate hierarchies that define subsidiary relationships, ownership structures, and consolidation rules.

    Operating structures: Payment terms (Net30, Net60, Net90) – the standardized payment timing options you offer. Service tiers (Bronze, Silver, Gold, Platinum) – the standardized service levels you’ve structured. Party involvement types (Bill-to, Ship-to, Payee, Payer) – the standardized operational roles in your business processes. Delivery methods (Standard, Express, International) – the standardized fulfillment options you provide.

These structures are Master Data. Not because they’re “important data.” Not because they’re “golden records.” Because they’re organizational frameworks that management controls and that change when strategy changes. As I explained in Master Data vs Reference Data, authority determines classification.

    What Master Data Is NOT

    This distinction matters.

    Master Data is NOT individual entities. Customer #12345 is a Master Record. Product SKU-789 is a Master Record. Ship-to Location SL-10001 is a Master Record. Employee S-456 is a Master Record.

    Master Data is NOT entity properties. Customer’s address is a Master Record property. Product specifications are Master Record properties. Location delivery hours are Master Record properties. Employee hire date is a Master Record property.

    Master Data is NOT transaction data. The sales order is a Transactional Record, governed by Contract Authority (the agreement). The invoice and payment are also Transactional Records.

    Master Data is the STRUCTURES organizing these entities. The hierarchies. The classifications. The mappings. The assignment rules. Not the things themselves. The frameworks for organizing things.

    As I explained in Master Data, Master Records, and Transactions: The Distinction That Clarifies Everything, authority type determines how data changes. Master Data has internal strategic authority – management controls these structures and changes them when strategy changes. Territory structures, product taxonomies, org hierarchies – all under management’s authority.

    The Three Structure Types In Detail

    Classification structures organize entities into hierarchies.

    Territory hierarchy example:

```
Enterprise Sales
├── North Region
│   ├── Strategic Accounts
│   │   └── Criteria: Revenue > $1M, Country = US/CA/UK
│   └── Growth Accounts
│       └── Criteria: Revenue $250K-$1M
└── South Region
    └── Emerging Markets
```

    Management controls this structure. They can reorganize, split the North Region into North Enterprise and North Mid-Market, combine the South Region with Emerging Markets, and redefine the Strategic Accounts threshold from $1M to $5M.

    The structure is mutable, strategic, under management authority.

    Operating structures define how your business operates.

    Payment terms:
    Net30
    Net60
    Net90
    Due on Receipt
    2/10 Net 30

    These are strategic choices. Management decides which payment options to offer. They can add new terms (Net45), remove unused terms, or change terms based on market conditions.

    Service tiers:

    Bronze: Basic support, 48-hour response
    Silver: Priority support, 24-hour response
    Gold: Dedicated support, 4-hour response
    Platinum: 24/7 support, 1-hour response

    Management defines these tiers. They can restructure service levels, add premium tiers, consolidate tiers, or change response time commitments.

    The structures define operating reality. Management controls them. They change when strategy changes.

    Relationship structures map connections between entities.

    Party-to-location assignment example:

    Customer: #12345 (Acme Corporation)
    Ship-to Locations:
      - Warsaw plant
      - Krakow distribution
      - Berlin office
      - Prague warehouse
      - Budapest facility
      - Vienna office
      - London headquarters

    This mapping is Master Data. The customer entity is a Master Record. Each location is a Master Record. The party-to-location relationship structure maps which locations belong to which customer.

    When Acme acquires a new facility or closes an existing one, you update the relationship structure. Customer Master Record unchanged. Location Master Record unchanged (or added/inactivated). Relationship structure updated.

    Cross-system mapping example:

    Real-world entity: Acme Corporation
    CRM System: CRM-12345
    ERP System: ERP-98765
    Billing System: BILL-45678

    The mapping structure connects these. No golden record needed. Just well-governed mappings maintained as Master Data.
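
    A minimal sketch of how these relationship structures could sit alongside the Master Records they connect – my own illustration, not a reference implementation, and every identifier in it is hypothetical:

    ```python
    # Master Records: the entities themselves (minimal properties shown).
    customers = {"C-12345": {"name": "Acme Corporation"}}
    locations = {"SL-10001": {"city": "Warsaw"}, "SL-10002": {"city": "Krakow"}}

    # Relationship structure (Master Data): party-to-location assignments.
    # Updating this mapping never touches the customer or location records.
    ship_to_assignments = {"C-12345": ["SL-10001", "SL-10002"]}

    # Relationship structure (Master Data): cross-system identity mapping.
    # No golden record - just a governed mapping between system identifiers.
    cross_system_ids = {
        "C-12345": {"CRM": "CRM-12345", "ERP": "ERP-98765", "BILLING": "BILL-45678"},
    }

    def add_ship_to(customer_id: str, location_id: str) -> None:
        """Acme opens a new facility: only the relationship structure changes."""
        ship_to_assignments.setdefault(customer_id, []).append(location_id)
    ```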

    Why This View Is Different

    I initially thought there might be two types of Master Data. Classification structures (territories, taxonomies) felt different from operating structures (Net30, service tiers). They seemed like separate categories requiring different treatment.

    But working through the Authority Rule framework revealed a pattern: it’s ALL structures. Just structuring in different ways.

    Classification structures organize entities into hierarchies. Operating structures define how your business operates. Relationship structures map connections between entities. All three are organizational frameworks management creates to run the business.

    None of them are the entities themselves. That’s the distinction.

    To my understanding, traditional Master Data Management conflates Master Records (the entities) with Master Data (the structures organizing them). “Master Data Management” becomes “manage all the important stuff” without separating what things are from how you organize them. The goal becomes creating ‘golden records’ for those entities – the focus stays on the entities, not the structures.

    This framework separates them. Master Data = structures. Master Records = entities. Different authority. Different mutability. Different governance.

    Authority Determines Mutability

    From Master Data, Master Records, and Transactions: The Distinction That Clarifies Everything: authority type determines how data changes.

    Reference Data (external authority): You can’t redefine ISO 3166 country codes. They change when ISO changes them. Immutable from your perspective.

    Master Data (internal strategic authority): Management controls these structures. They redefine when strategy demands. Territory hierarchies reorganize. Product taxonomies restructure. Service tiers change. Mutable by design.

    Master Records (internal operational authority): Properties change when entity facts change. Customer relocates, revenue grows, status changes. Time-stamped changes, not strategic restructuring.

    As I explored in Contracts: The Fifth Authority Type, Contractual Authority is a bilateral binding agreement that locks specific versions of Master Data and Master Records for the contract duration. Neither party can unilaterally change the bound terms. Courts arbitrate disputes.

    Transactions (captured authority): What happened stays what happened. Immutable truth. Customer bought product for amount on date – that’s a fact that doesn’t change.

    Different authority. Different mutability. Different governance. Keep them separate in architecture.

    Versioning Master Data

    Since Master Data structures change when strategy changes, version them independently from Transactions and Master Records.

    Territory structure v1.0 (effective 2020-01-01 through 2025-12-31):

    Regional Sales containing North Region and South Region.

    Territory structure v2.0 (effective 2026-01-01 onwards):

    Enterprise Sales containing North Region Enterprise and South Region Enterprise, plus Mid-Market Sales containing North Region Mid-Market and South Region Mid-Market.

    Both versions exist. Historical transactions use v1.0 when reporting 2023 data. Current transactions use v2.0 when reporting 2026 data. Trending across the change? System joins transaction date with appropriate structure version automatically.

    No reposting. No migration. Update structure, version it, reporting adapts.

    This works because the transaction never stored the classification. It only stored Customer ID and date. The system determines which territory structure to apply at query time, not storage time. Nothing to convert because nothing was classified when the transaction was captured.

    This is different from the mapping tables rejected in the previous post about restructuring. Those mappings translated between versions of embedded structures—Old_Territory → New_Territory chains that compound forever.

    Instead, these structures ARE Master Data—first-class architectural components that define entity connections. Party-to-location assignments don’t fix bad architecture. They ARE the architecture for managing relationships.
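
    Here is a minimal sketch of the query-time version selection described above, assuming rule-based territory assignment; the version boundaries and assignment rules are invented for illustration:

    ```python
    from datetime import date

    # Versioned territory structures (Master Data). Each version carries its
    # own effective window; transactions never store a territory at all.
    territory_versions = [
        {"version": "1.0",
         "effective_from": date(2020, 1, 1), "effective_to": date(2025, 12, 31),
         "assign": lambda c: f"Regional Sales > {c['region']} Region"},
        {"version": "2.0",
         "effective_from": date(2026, 1, 1), "effective_to": date.max,
         "assign": lambda c: (f"Enterprise Sales > {c['region']} Region Enterprise"
                              if c["revenue"] >= 1_000_000
                              else f"Mid-Market Sales > {c['region']} Region Mid-Market")},
    ]

    def structure_for(txn_date: date) -> dict:
        """Pick the structure version in force on the transaction date."""
        return next(v for v in territory_versions
                    if v["effective_from"] <= txn_date <= v["effective_to"])

    # Classification happens at query time, not storage time.
    customer = {"region": "North", "revenue": 2_500_000}
    print(structure_for(date(2023, 6, 15))["assign"](customer))  # v1.0 view
    print(structure_for(date(2026, 3, 1))["assign"](customer))   # v2.0 view
    ```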

    Does This Architecture Work?

    I haven’t implemented this exact Master Data versioning architecture in a production system. But across insurance, manufacturing, banking – 12+ years as a business architect – every restructuring nightmare I’ve seen so far comes from the same pattern: mutable structures embedded in immutable transactions.

    Authority Rule reveals what’s needed: separate by authority type. Master Data (strategic, mutable structures) in one layer. Master Records (operational, time-stamped entities) in another layer. Transactions (captured, immutable facts) in a third layer. When you separate them, restructuring becomes routine instead of exceptional.

    I’m testing this separation principle against real architectural scenarios. Every domain I’ve applied it to validates the pattern. Embedded structures always create a restructuring problem. Separated layers always enable flexibility.

    The framework suggests this is what enables strategic flexibility without breaking operational stability. I’m working to prove it.

    Governance By Authority Type

    Master Data governance is strategic, not operational.

    Who governs Master Data: Management, architecture, business stakeholders who define strategic frameworks. Not operational teams who maintain entity data.

    What they govern: Classification structures, operating standards, relationship mappings. Not individual entity properties.

    How it changes: Strategic decision-making. Territory reorganization requires management approval. Service tier restructuring requires executive sign-off. Cross-system mapping changes require architectural governance.

    Why it matters: Master Data changes affect reporting, analytics, strategic metrics. Changes aren’t just “data updates” – they’re strategic decisions with business impact.

    Contrast with Master Records governance (operational – sales ops maintains customer properties, product managers maintain SKU specs) and Transactional Records governance (none – transactions are immutable).

    Different authority requires different governance. Separate the layers, separate the governance models.

    Why This Matters

    Strategic flexibility requires architectural separation.

    If management can’t restructure territories without 3-month projects, architecture constrains strategy. If product teams can’t reorganize taxonomies without data migration, architecture blocks evolution. If finance can’t redefine cost centers without reposting transactions, architecture creates rigidity.

    Master Data as versioned structures enables strategic flexibility. Restructure when business needs it. Version the change. Reporting adapts automatically. Transactions unchanged. Master Records unchanged. No migration. No reposting.

    Business moves at business speed. Not IT speed.

    That’s not “cleaner data architecture.” That’s competitive advantage. Competitors stuck with embedded structures take months to adapt. You take days. That’s the difference between architecture that enables and architecture that constrains.

    The Test

    Next time someone says “We need to restructure territories,” ask: “How long will it take?”

    If the answer involves reposting transactions or migrating master data, structures are embedded where they shouldn’t be.

    If the answer is “Update the structure, version it, done,” structures are properly separated as Master Data.

    One architecture enables strategy. The other constrains it.


    Next post: Master Records: The Entities Themselves – the entities that Master Data structures organize.


    This post is part of the Systems Thinking Series and how the 5 authority types impact system design. Read the full series at authorityrule.com/

  • The Three-Layer Architecture Pattern

    The premise is that embedding Master Data in transactions makes restructuring expensive and brittle.

    Territory hierarchies baked into sales records. Product classifications locked into invoices. Customer segmentations part of financial transactions. When management wants to reorganize, you either repost transactions, build mapping tables, or accept broken historical reporting.

    The Authority Rule points to a different architecture. The distinction between Master Data, Master Records, and Transactions separates what happened from how you classify it.

    Three layers. Separate what happened from how you classify what happened. Separate entity facts from organizational structures. Each layer governed differently, each layer versioned differently, each layer serving its own purpose.

    Keep them separate, and restructuring becomes (hopefully) trivial.

    The Three Layers

    Layer 1: Transactions – What happened. Immutable facts captured at the moment of occurrence.

    Layer 2: Master Records – Entity properties that define specific entities: this customer, this product. Time-stamped so you know what was true when.

    Layer 3: Master Data – Classification structures and operating standards that organize entities. Versioned independently so management can restructure without touching operational data.

    Each layer has different authority. Different mutability. Different governance. The architecture works because you keep them separate and join them only when reporting.

    Layer 1: Transactions – Immutable Facts

    Store the absolute minimum. Entity identifiers, amounts, dates. Reference Data values that are facts about this specific transaction (like the currency).

    Transaction ID: TXN-2025-10-15-001
    Amount: $50,000
    Customer ID: #12345
    Product ID: SKU-789
    Salesperson ID: S-456
    Date: 2025-10-15
    Currency: USD

    That’s it. No territory hierarchy. No product classification. No customer segmentation. Just the facts about what happened.

    As I explored in Transactions: Why Embedding Structures Breaks Everything, transactional records have ‘captured authority’ – the system recorded what happened, and you can’t change what happened. This transaction stays this way forever.

    Everything else – how you classify this sale, which territory you assign it to, which product hierarchy you organize it under – lives in other layers.
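
    A small sketch of what this layer could look like in code – the field names are mine, and a frozen dataclass is just one way to make the captured-authority immutability explicit:

    ```python
    from dataclasses import dataclass
    from datetime import date

    # A minimal, immutable transaction: entity IDs, amount, date, currency.
    # frozen=True makes accidental mutation raise an error - captured
    # authority enforced in code. No territory, no category, no segment.
    @dataclass(frozen=True)
    class SaleTransaction:
        transaction_id: str
        customer_id: str
        product_id: str
        salesperson_id: str
        amount: float
        currency: str          # ISO 4217 - a fact about this transaction
        occurred_on: date

    txn = SaleTransaction("TXN-2025-10-15-001", "#12345", "SKU-789",
                          "S-456", 50_000.0, "USD", date(2025, 10, 15))
    # txn.amount = 1.0  # would raise dataclasses.FrozenInstanceError
    ```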

    Layer 2: Master Records – Entity Properties

    Time-stamped properties that define specific entities. Customer moved from England to Scotland? Product specifications changed? Location delivery hours updated? Time-stamp the change in the Master Record.

    Customer #12345:

    Effective 2020-03-01 to 2025-09-30:
    - Country: UK
    - Address: 45 London Road, Manchester

    Effective 2025-10-01 onwards:
    - Country: UK
    - Address: 123 Main St, Glasgow

    Every transaction references the entity ID. When reporting, the system joins transaction date with Master Record effective dates automatically. Transaction from June 2025? System uses the English properties. Transaction from November 2025? System uses Scottish properties.

    No reposting. No manual updates. Time-stamping handles it.
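
    A minimal sketch of that effective-date join, using the Manchester/Glasgow example above; the helper name and data shape are illustrative, not a prescribed schema:

    ```python
    from datetime import date

    # Effective-dated properties for Customer #12345, mirroring the example
    # above. A move adds a new row with a new effective date - nothing is
    # reposted, nothing is overwritten.
    customer_12345 = [
        {"from": date(2020, 3, 1), "to": date(2025, 9, 30),
         "country": "UK", "address": "45 London Road, Manchester"},
        {"from": date(2025, 10, 1), "to": date.max,
         "country": "UK", "address": "123 Main St, Glasgow"},
    ]

    def properties_as_of(history: list, as_of: date) -> dict:
        """Return the property set that was in force on the given date."""
        return next(row for row in history if row["from"] <= as_of <= row["to"])

    print(properties_as_of(customer_12345, date(2025, 6, 15))["address"])  # Manchester
    print(properties_as_of(customer_12345, date(2025, 11, 2))["address"])  # Glasgow
    ```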

    Master Records have “your authority, typically operational” – these are the entities your business operates with, and you control when their properties change. I explore this in depth in Master Records: The Entities Themselves.

    Layer 3: Master Data – Classification Structures

    This is where organizational strategy lives. Territory hierarchies. Product taxonomies. Org structures. Operating standards like payment terms and service tiers. Relationship mappings like party-to-location assignments.

    These structures change when strategy changes – that’s why Master Data is the layer that changes. Version them independently from Transactions and Master Records.

    Territory structure effective Q1 2025:

    Enterprise Sales
    ├── North Region
    │   ├── Strategic Accounts (Revenue > $1M, Country = US/CA/UK)
    │   └── Growth Accounts (Revenue $250K-$1M)
    └── South Region
        └── Emerging Markets

    Territory structure effective Q1 2026 (strategy shifted):

    Enterprise Sales
    ├── Global Strategic (Revenue > $5M, any country)
    ├── Regional Enterprise (Revenue $1M-$5M)
    │   ├── North America
    │   └── EMEA
    └── Growth Segment (Revenue < $1M)

    Same transactions. Same Master Records. Different classification structure.

    How Reporting Joins the Layers

    Query: “Show me sales by territory for Q4 2025”

    System process:

    1. Transactions layer – Pull all transactions from Q4 2025, get Customer IDs
    2. Master Records layer – Join Customer IDs with properties effective during Q4 2025
    3. Master Data layer – Apply territory rules effective during Q4 2025
    4. Result – Sales classified by territory as it was structured in Q4 2025

    Now run the same query for Q4 2026. The system automatically uses the territory structure effective in Q4 2026. Different classifications. Same transactions. Same Master Records.

    When you restructure territories in January 2026, you update the Master Data layer. Version it. Effective date: 2026-01-01.

    Transactions unchanged. Master Records unchanged. Historical reporting uses old structure. Future reporting uses new structure. Trending across the change? System handles it automatically by joining appropriate versions.

    No reposting. No mapping tables. No data migration. Update the structure, version it, reporting adapts.
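
    Pulling the three layers together, here is a self-contained sketch of that query-time join – all data, dates, and classification rules below are invented for illustration:

    ```python
    from datetime import date

    transactions = [    # Layer 1: immutable facts - IDs, amounts, dates only
        {"customer_id": "C-1", "amount": 50_000.0, "date": date(2025, 11, 20)},
        {"customer_id": "C-1", "amount": 30_000.0, "date": date(2026, 2, 10)},
    ]

    master_records = {  # Layer 2: time-stamped entity properties
        "C-1": [{"from": date(2020, 1, 1), "to": date.max, "revenue": 2_500_000}],
    }

    master_data = [     # Layer 3: versioned classification structures
        {"from": date(2020, 1, 1), "to": date(2025, 12, 31),
         "classify": lambda p: "North Region"},
        {"from": date(2026, 1, 1), "to": date.max,
         "classify": lambda p: ("Global Strategic" if p["revenue"] > 5_000_000
                                else "Regional Enterprise")},
    ]

    def as_of(rows: list, d: date) -> dict:
        """Pick the row (or version) in force on a given date."""
        return next(r for r in rows if r["from"] <= d <= r["to"])

    def sales_by_territory(start: date, end: date) -> dict:
        totals: dict = {}
        for txn in (t for t in transactions if start <= t["date"] <= end):
            props = as_of(master_records[txn["customer_id"]], txn["date"])  # Layer 2
            territory = as_of(master_data, txn["date"])["classify"](props)  # Layer 3
            totals[territory] = totals.get(territory, 0.0) + txn["amount"]
        return totals

    print(sales_by_territory(date(2025, 10, 1), date(2025, 12, 31)))  # old structure
    print(sales_by_territory(date(2026, 1, 1), date(2026, 3, 31)))    # new structure
    ```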

    Why This Matters

    The traditional approach embeds structures in transactions. Want to reorganize territories? In many systems you have to repost every transaction to reflect the new hierarchy, or rework your data warehouse. Want to change the product taxonomy? Update thousands of records. Want to implement a new customer segmentation? That’s potentially a data migration project – not between systems, but between structures.

    Expensive. Slow. Error-prone.

    Three-layer architecture separates immutable facts (transactions) from stable properties (Master Records) from mutable structures (Master Data). Each layer governed independently. Each layer versioned appropriately.

    Management changes strategy? Update Master Data. New effective date. Done. Transactions and Master Records stay untouched. Historical reporting still works. New reporting uses new structure automatically.

    That’s strategic flexibility. That’s operational stability. That’s architecture that enables business to move at business speed rather than IT speed.


    Next post: Master Data: The Layer That Changes – what Master Data actually is, what it contains, and how versioning classification structures independently creates flexibility without chaos.


    This post is part of the Systems Thinking Series and how the 5 authority types impact system design. Read the full series at authorityrule.com/

  • When Management Restructures, Systems Break

    Three months to restructure sales territories.

    Not because the business decision is hard. That took one meeting. Sales leadership decided to organize by customer size instead of geography. Split “North Region” into “Enterprise” and “Mid-Market.” Simple strategic shift.

    But every transaction in the system has the old territory embedded in it.

    Change the territory structure, and you break history. So you spend three months (or more) reposting transactions, updating reports, fixing analytics, rebuilding dashboards. All because someone decided to split regions differently.

    This happens everywhere. Insurance, banking, manufacturing, government, retail. Management makes a strategic decision in one hour. IT spends three months implementing it.

    Not because the change is complex. Because something is architecturally wrong. As I explored in Master Data vs Reference Data, understanding data authority explains why.

    The Pattern Repeats Everywhere

    Territory restructuring: Sales leadership decides to reorganize by customer size instead of geography. IT and Finance spend months reposting every sales transaction with new territory assignments. Historical reports break. Year-over-year comparisons stop working. The 2023 North Region isn’t the same as the 2026 North Region, but the system treats them as identical.

    Product reclassification: Product management decides laptops should be under “Mobile Computing” instead of “Desktop Computing.” IT spends weeks updating the system. Historical product reports show laptops that never existed under “Mobile Computing” in 2023 suddenly appearing there. Data integrity questions follow.

    Org restructure: Executive team reorganizes divisions. Finance spends months reclassifying expenses, updating cost centers, rebuilding management reports. Department budgets from 2023 get compared against completely different organizational structures in 2026. Trending becomes meaningless.

    Customer segmentation: Marketing redefines “Enterprise” vs “Mid-Market” based on new criteria. Analytics team spends weeks recalculating historical segments, updating every customer interaction. Three years of customer behavior reports become unreliable because the definition of “Enterprise” changed halfway through.

    Same problem. Different domains. Different systems.

    Management wants to change how things are classified. IT has to change every transaction that references that classification.

    Every. Single. Transaction.

    What It Actually Costs

    Time cost: Three months to restructure territories. Six months to reclassify products. Quarterly cycles to update cost centers. Strategic decisions get delayed because implementation takes too long. The market moves faster than your systems can adapt.

    Opportunity cost: Management can’t adapt to market changes because they’re stuck with old structures. Change is too expensive. “We’d like to restructure, but IT says it’ll take three months and disrupt operations.” Competitors who can restructure quickly gain advantage. You watch opportunities pass while waiting for IT capacity.

    Quality cost: Reposting introduces errors. Someone makes a mistake in the transformation logic. Historical data loses integrity. Reports stop matching. Finance questions the numbers. “Is this report accurate or did someone mess up the territory mapping?” Trust in data erodes.

    Innovation cost: You can’t experiment with new structures. Too expensive to try. Too expensive to revert if it doesn’t work. Business model evolution gets constrained by technical implementation costs. “Let’s test this new segmentation approach” becomes “Let’s spend three months implementing it first, then see if it works.”

    Strategic cost: Architecture determines what the business can do. Strategy becomes limited by system constraints. The business exists to serve the systems instead of systems serving the business. The tail wags the dog.

    Why Traditional Fixes Don’t Work

    In my experience, organizations try three approaches, none of which truly solves the problem.

    Fix attempt 1: Just repost everything

    Three months of work every time management restructures. 2023 data gets reclassified using 2026 structure, so you can’t see what territories actually were in 2023. You lose historical truth. Reposting logic has bugs. Data quality degrades. Trust erodes.

    Ultimately unsustainable. Management can’t restructure when business needs it because they have to wait for IT capacity.

    Fix attempt 2: Keep mapping tables

    Create mappings between old structure and new structure. Old “North Region” maps to new “North Enterprise” plus “North Mid-Market.” Now you have two problems: the original embedded structures, plus mapping tables to maintain forever.

    Mappings compound with each version. A v1 → v2 mapping, then v2 → v3, then a derived v1 → v3. The complexity multiplies with every restructure. Multi-hop mappings lose precision as the chain of transformations accumulates ambiguity.

    Historical analysis still breaks. You can map forward but lose the original classification. Trending across the transition becomes interpretation, not fact.

    Fix attempt 3: Keep both structures

    Add new fields. Keep old territory, add new territory. Bloat every transaction.

    Sale Transaction (Bloated):
    Territory_v1: Regional Sales > North Region
    Territory_v2: Enterprise Sales > North Region Enterprise
    Territory_v3: ... (next restructure)
    Territory_v4: ... (next restructure)

    Schema changes every time management restructures. You can’t add fields retroactively to historical transactions without reprocessing anyway. Reporting becomes a nightmare. Which territory field to use depends on transaction date, creating conditional logic everywhere.

    None of these solve the problem. They just manage the symptoms.

    The Real Question

    Why does changing a classification structure require changing every transaction?

    Why does a strategic decision in one hour require three months of implementation?

    Why does reorganizing territories break historical reports?

    Territory is how you organize customers. Customer is the thing being organized. Why does changing the organization require changing the things?

    When you restructure your filing cabinet, you don’t rewrite every document in it. You reorganize the folders. Documents stay unchanged. You pull them from different locations, but the documents themselves are immutable.

    So why do systems work differently?

    Something Is Architecturally Wrong

    This isn’t a data problem. It’s an architecture problem.

    The structure shouldn’t be embedded in the transaction. The organization shouldn’t be embedded in the thing being organized. The mutable (Master Data that changes) shouldn’t be embedded in the immutable (transactions that don’t).

    But current systems embed structures everywhere. Territory in every sales transaction. Product category in every order. Org structure in every expense. Customer segment in every interaction.

    When structure changes, everything breaks. The three-layer architecture pattern shows why this happens.

    There must be a better way.

    The Pattern

    Once you notice this pattern, you see it everywhere.

    Any time a strategic restructuring takes months to implement, structures are embedded where they shouldn’t be.

    Any time management says “We’d like to reorganize but…” they’re blocked by architecture constraints.

    Any time historical reports break after a restructuring, immutable facts contain mutable structures.

    The pattern is universal. The cost is enormous. The solution isn’t obvious.

    But the problem is clear: something fundamental about how we build systems is wrong.


    Next post: Why this happens – and what it reveals about how we classify data.


    This post is part of the Systems Thinking Series and how the 5 authority types impact system design. Read the full series at authorityrule.com/

  • Technical Capabilities: Architecture’s Missing Language?

    Architecture is too slow. It’s something I think most of us architects have heard. Not because architects are slow, per se – I think it’s because every architect describes things differently.

    You can’t compare implementations if they’re described in incompatible languages. You can’t identify patterns if everyone uses different terms. It’s a lot harder to build reusable technical components if requirements are written differently every time.

    I needed a standardized way to specify what technology needs to DO. Independent of vendor or solution. Independent of architect. I couldn’t find one that worked, so I built one.

    Full disclosure: I haven’t read every enterprise architecture book. Frankly, it’s enough work keeping up with the Business Architecture field and the other areas that interest me. This might exist in TOGAF, ArchiMate, or some framework I haven’t encountered. If it does, tell me. I’d genuinely like to see it. But in my career, I haven’t found it in practice.

    What follows is what I built because I needed it. It’s evolving. Not final. But useful.

    The Problem: What vs How Gets Mixed Up

    Business Capability Models describe what the business does – Order Management, Customer Service, Financial Reporting. But they don’t describe what technology needs to DO to enable those capabilities.

    In my experience, what happens is that people try to add technical capabilities to the Business Capability Model. They put “Document Generation” alongside “Risk Assessment” in the same model, which causes confusion.

    What I’ve also observed is that technology requirements are written differently every time: no reuse, no patterns, just reinvention. And they specify vendors instead of capabilities – “System shall use AWS” locks you in before you’ve evaluated alternatives.

    It made me realise technical capabilities needed their own model. Separate from business capabilities.

    What Technical Capabilities Actually Are

    A technical capability describes what technology can do, independent of vendor.

    Not “We use AWS.” Instead: “We need Cloud Storage capability.”

    Not “We implement Azure API Management.” Instead: “We require API Management capability.”

    The capability stays stable. The vendor can change. Think specification, not solution.

    This isn’t developer requirements. Developers need user stories, API contracts, data models, test cases. Those come from business analysts and solution architects. Technical capabilities answer a different question: “What must the platform be able to do?” Not “What should the system do for this user story?”

    Different layers. Both necessary. Not mixed together.

    The Framework: 12 Core Domains

    I’ve been developing a Technical Capability Model that organizes capabilities into 12 core domains, spanning everything from Data & Analytics through Emerging Technologies. Each domain breaks down into capability groups. Each group contains specific capabilities.

    Three levels:

    • Level 1: Domain (Data & Analytics)
    • Level 2: Capability Group (Data Quality Management)
    • Level 3: Specific Capability (Data Quality Rules Engine)

    Each capability includes what it does and why it matters. Approximately 300 technical capabilities total.

    [LINK: Technical Capability Model download]

    The framework is freely available. Standardized specifications help everyone. If you find it useful, tell me how you’re applying it.

    After I developed the model to identify what a system needs to be able to do, I realised it could be used for other things too.

    Domain-Specific Extensions

    Standard technical capabilities apply everywhere – API Management, Data Storage, Workflow Orchestration. But some domains need specialized capabilities.

    Banking needs SWIFT message processing. Healthcare needs HL7 message processing. Manufacturing needs supply chain integration capabilities. The framework extends with domain-specific capability groups while maintaining core structure.

    Take the 12 core domains. Add your industry-specific extensions. Remove capabilities you’ll never use. The aim is standardization within a domain, without forcing every industry into the same mold.

    Requirements vs Capabilities: The Distinction That Changes Everything

    Here’s the problem with most requirements: they describe what the system needs to DO for this specific use case.

    “System shall calculate price and applicable taxes for customer orders.” “System shall validate customer address against postal database.” “System shall generate contracts in PDF format.”

    Those are use-case requirements. Specific to this project. Not reusable. Worse: they mix business logic with technical capabilities.

    “Calculate price and taxes” is business logic. What technical capabilities could that potentially require?

    • Business Rules Management (TC-8.2) – to apply pricing rules and tax calculations
    • Data Processing & Preparation (TC-1.4) – to transform input data
    • Data Aggregation (TC-1.4.3) – to sum price components
    • API Management (TC-3.1) – if calling external tax calculation services

    The Technical Capability Model forces clarity. You can’t write “System shall calculate price.” You have to specify which technical capabilities the system needs to be able to do that calculation.

    Instead of: “System shall validate customer address” Write: “System requires: Data Quality Management (TC-1.5), API Management (TC-3.1) for postal validation service integration”

    Instead of: “System shall generate contracts” Write: “System requires: Document Generation (TC-11.2), Template Management (TC-11.2.1), Multi-Format Output (TC-11.2.3)”

    Now those technical capabilities are reusable. Next project needs document generation? Same capabilities, different templates, different use case.

    The capabilities don’t change. The business application of them does.
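
    As a sketch of what a capability-based requirement could look like when captured as data (the field names are mine; the TC codes follow the examples above):

    ```python
    # A requirement that references technical capabilities by catalog ID.
    # The structure and field names are illustrative, not a standard schema.
    requirement = {
        "id": "REQ-0042",   # hypothetical requirement ID
        "business_logic": "Generate customer contracts from approved templates",
        "required_capabilities": [
            "TC-11.2",      # Document Generation
            "TC-11.2.1",    # Template Management
            "TC-11.2.3",    # Multi-Format Output
        ],
    }

    def capability_usage(requirements: list) -> dict:
        """Count how often each capability appears - reuse becomes visible."""
        counts: dict = {}
        for req in requirements:
            for cap in req["required_capabilities"]:
                counts[cap] = counts.get(cap, 0) + 1
        return counts
    ```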

    Why This Matters: The Reuse Advantage

    Traditional procurement:

    Project A writes requirements: “System shall calculate prices with promotional discounts.”

    Project B writes requirements: “System shall calculate shipping costs with volume discounts.”

    Project C writes requirements: “System shall calculate subscription prices with tiered pricing.”

    All three procurements specify rules engines. All three write requirements from scratch. No one realizes they’re buying the same capability three times.

    Capability-based procurement:

    All three projects identify: Business Rules Management (TC-8.2) required. Enterprise architecture says: “We’re procuring rules engine capability once.” Platform team evaluates vendors against standard capability requirements. Three projects use the same rules engine for different business logic. Specify once. Procure once. Reuse everywhere.

    That’s not constraint. That’s clarity enabling efficiency.

    Technical capabilities don’t constrain business logic. They enable it. “Calculate price and taxes” is business logic – different for every organization, often different for every product line. But the technical capability, Business Rules Management, is the same. Authoring rules, testing rules, versioning rules, deploying rules – that’s what technology needs to be able to DO.

    WHAT rules you write? Business logic.

    WHERE you write those rules? Technical capability.

    Business teams define logic. Architecture teams specify technical capabilities needed to execute that logic. Procurement teams find vendors who provide those capabilities. Different responsibilities. Clear boundaries.

    The Objections (Let’s Address Them Now)

    If you’re an architect, you’re already thinking of problems with this approach. Let’s address them directly.

    Objection 1: “You’re constraining how vendors solve problems”

    No. We’re forcing clarity about what needs solving.

    Without capability specifications: RFP says “System shall handle pricing and taxes.” Five vendors interpret that five different ways. One has no rules engine, hard-codes logic. One has sophisticated rules engine you’ll never use. Three have different rules engines with incompatible capabilities.

    With capability specifications: RFP says “System requires Business Rules Management (TC-8.2) including Rules Authoring, Version Control, Rule Testing.” All vendors know exactly what’s required. They propose solutions that provide those capabilities. You evaluate HOW they provide them. Vendors compete on implementation quality, not interpretation.

    You’re not constraining solutions. You’re clarifying requirements. Vendors still choose how to implement. You just specified what must work when they’re done.

    Objection 2: “This is just another framework to maintain”

    Fair. But you’re already maintaining something.

    Either you’re maintaining inconsistent architecture documentation where every architect describes things differently, or you’re maintaining consistent documentation using a standard catalog. One enables comparison and reuse. One doesn’t.

    Pick your maintenance burden.

    Objection 3: “What if I need a capability that’s not in your catalog?”

    Add it.

    This is V0.5. It’s not complete. It’s not final. If you need a capability that doesn’t exist in the framework, that’s useful information. Either I missed something obvious (tell me, I’ll add it), you’ve found a domain-specific capability (extend the framework), or you’re confusing business logic with technical capability (happens often).

    The catalog evolves through use. That’s the point of sharing it.


    How You Might Use This

    Vendor evaluation: List required technical capabilities. Evaluate which vendors provide them. Identify gaps before concluding the procurement. No more “we discovered this limitation after contract signing.”

    Functional requirements: Specify capabilities, not solutions. Then choose solutions that provide those capabilities.

    Current state documentation: Document which technical capabilities exist today. Identify gaps. Identify redundancies. Compare systems using common language. No more “we have three different things that do basically the same job but we can’t tell because they’re described differently.”

    Platform design: Specify which technical capabilities the platform must provide. Teams build on platform capabilities. Clear boundaries between platform and applications.

    Pattern recognition: If every architecture uses the same capability catalog, patterns become visible. “This type of module consistently requires these capabilities.” Patterns become reusable specifications.

    Different architects still have different opinions. But now they’re disagreeing about the same things, using the same language. That’s progress.

    What This Enables

    Architecture reviews: Faster. Common language.

    Vendor comparisons: Clearer. Same evaluation criteria.

    Current state documentation: Comparable. Standard format.

    Requirements: Portable. Capability-based, not solution-based.

    Patterns: Reusable. Described consistently.

    Architecture accelerates. Not by working faster, but by working with standardized, reusable specifications.

    Get the Framework

    The complete Technical Capability Model includes all 12 domains, approximately 300 capabilities, with descriptions and example technologies.

    [LINK: Technical Capability Model download]

    I’m sharing it because standardized specifications help everyone. If you find it useful, tell me how you’re applying it. If you find gaps, tell me what’s missing.

    This framework evolves through use, not theory.

    What Comes Next

    This is a working framework. I’m testing it. Refining domains. Adjusting definitions. Adding capabilities I didn’t realize I needed.

    Maybe someone will tell me this exists in chapter 7 of an enterprise architecture book I haven’t read. Great. Point me to it. But right now, this is what I built because I needed it.

    Take this framework. Adapt it. Make it work for your context. Add your domain-specific capabilities. Remove what you don’t need. The value isn’t in my specific catalog. The value is in having A catalog that everyone where you work uses consistently.

    Architecture needs standardized technical capabilities. Not to eliminate creativity. To eliminate rework.

    When architects speak the same language, architecture accelerates. A Technical Capability Model is one way to provide that language.


    This post is part of my Architecture Foundations series exploring systematic approaches to enterprise architecture. Read more at authorityrule.com/

  • Standards Enforcement: Build vs Buy Makes No Difference

    Post 9 in The Authority Rule Series

    Last post covered buying systems from vendors who don’t use ISO standards. The hidden costs, the mapping tables, the technical debt that compounds forever.

    You might think: “We’ll build it ourselves. Then we control the outcome.”

    You do. If you enforce standards.

    Most teams don’t.


    The Build vs Buy Parallel

    When you buy a vendor system that doesn’t use ISO standards, you inherit their technical debt. You discover it during implementation or years later, too late to walk away.

    When you build a system in-house that doesn’t use ISO standards, you create your own technical debt. Often, you don’t discover it until integration, or audit, or years later.

    Same problem. Different source.

    The advantage of building in-house: You can prevent it. The reality: Most teams don’t.

    What Should Happen (But Often Doesn’t)

    When you build a bridge, the structural engineer specifies materials before construction starts. Grade 50 steel. Class 8.8 bolts. 40MPa concrete. If those materials can’t deliver the architect’s design, they sort it out before building, not after.

    In my experience software development rarely works this way.

    Business requirement: “The system needs to handle international customers.” Business Analyst writes requirements. Developer builds it. Using what country codes?

    Often, no one specifies. The developer makes a choice – maybe ISO 3166, maybe a custom list, maybe whatever dropdown library they found online, maybe just a text field. You discover the problem later during integration testing, when the compliance team asks why your system defines countries differently than the third-party screening service, or during the first regulatory audit.

    Or you never discover it. You just accumulate technical debt and wonder why integration is always harder than you thought it would be.

    Why This Happens (And Why I Missed It)

    I assumed developers would use ISO standards for countries, currencies, languages. Why wouldn’t they?

    Maybe ISO standards cost money? Maybe developers don’t know they exist? Maybe “I only need 10 countries for this system” seems simpler than researching the standard? Maybe “Micronesia (Federated States)” versus “Federated States of Micronesia” doesn’t seem like it matters when you’re just building a dropdown?

    I’ve now encountered at least three separate systems, over my career, where vendors or development teams used custom codes instead of ISO standards. Each time, I was surprised. Maybe I’m blind to this. Maybe my background made it invisible.

    But the evidence is clear: there’s no profession-wide expectation that reference data means ISO standards, or an industry standard. No enforcement mechanism. No licensing requirement. No inspector who fails the build. Developers make choices under time pressure. Sometimes they choose standards. Sometimes they choose “good enough for now.”

    “Now” becomes forever, and when they don’t use standards, you also pay forever.

    Role Accountability That Could Work

    I’ve been thinking about what could improve the situation, and I think the answer lies in clearer role accountability that makes standards enforceable. Here’s what it could look like.

    Architects define which standards apply. ISO 3166 for countries. ISO 4217 for currencies. ISO 639 for languages. Industry standards like ACORD for insurance, NAICS for industry classification. Not suggestions—requirements or patterns. Documented, findable, required.

    Developers use the standards. When building country fields, ISO 3166. When building currency fields, ISO 4217. No custom codes. No “we’ll fix it later.” If the standard isn’t documented, ask Architecture which one applies. Don’t invent your own. Code reviews check compliance. Architecture violations aren’t style issues—they’re technical debt with compounding costs.

    Business Analysts reference standards in requirements. Most of them don’t – I didn’t when I worked as a business analyst earlier in my career. They write “system shall capture customer country” instead of “system shall use ISO 3166-1 alpha-2 codes.” User stories should reference standards explicitly. Acceptance criteria should verify compliance.

    QA verifies compliance. Test data uses correct codes. System validation enforces standards. If country codes don’t match ISO 3166, that’s a defect—flag it. Most QA plans don’t check for this.
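
    As a sketch of the kind of check QA or a code-review hook could run (the allowed set is deliberately truncated; in practice you would load the full ISO 3166 list from a maintained source):

    ```python
    # Truncated for illustration - in practice, load the full ISO 3166 list
    # from a maintained source rather than hard-coding it.
    ISO_3166_ALPHA_2 = {"SE", "GB", "US", "CA", "DE", "FR", "PL"}

    def non_compliant_countries(records: list) -> list:
        """Return records whose country code is not a valid ISO code - defects."""
        return [r for r in records if r.get("country") not in ISO_3166_ALPHA_2]

    test_data = [
        {"customer_id": "C-1", "country": "SE"},      # compliant (Sweden)
        {"customer_id": "C-2", "country": "Sve"},     # defect: custom 3-letter code
        {"customer_id": "C-3", "country": "Sweden"},  # defect: free text
    ]
    assert len(non_compliant_countries(test_data)) == 2
    # Note: a format check alone won't catch "SV" used for Sweden - that is a
    # valid ISO code, it just means El Salvador. Semantics still need review.
    ```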

    When You Actually Need Custom Codes

    Sometimes you genuinely need translations between standards and legacy reality.

    Legacy system migration: Old system used “GB” to mean Great Britain. ISO 3166 uses “GB” for United Kingdom. During migration, you need to handle both. Time-boxed. Documented. Translation layer clearly identified. Removed post-migration.

    Vendor integration: External system sends “Micronesia” as free text. You can’t change their system. Build a translation layer at the integration boundary. Map to ISO internally. Keep the non-compliant data at the edge.

    Customer-facing display: ISO code “GB” displays as “United Kingdom” in the UI. That’s presentation layer, not data layer. ISO standard underneath, friendly text on top.

    But translations live in specific places—integration boundaries, presentation layers. They don’t pollute the core data model.
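
    A minimal sketch of such a boundary translation – the mappings are illustrative, and the point is that only ISO codes ever reach the core data model:

    ```python
    # Map a vendor's free-text or legacy country values to ISO 3166-1 alpha-2
    # at the integration boundary. Mappings shown are illustrative.
    VENDOR_TO_ISO = {
        "Micronesia": "FM",
        "Federated States of Micronesia": "FM",
        "Great Britain": "GB",
    }

    def to_iso_country(vendor_value: str) -> str:
        """Translate an inbound value; fail loudly on anything unmapped."""
        try:
            return VENDOR_TO_ISO[vendor_value.strip()]
        except KeyError:
            raise ValueError(f"Unmapped country value from vendor: {vendor_value!r}")

    print(to_iso_country("Micronesia"))  # "FM" - only ISO codes stored internally
    ```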

    What Happens Without Enforcement

    Developer A uses “SV” for Sweden (thinking Svenskt). Developer B uses “SE” for Sweden (ISO correct). Developer C uses “Sweden” as free text because the field was varchar(50). Developer D uses “Sve” because they needed varchar(3) and didn’t know ISO existed.

    Now you have four definitions of Sweden across your systems.

    You just rebuilt the vendor problem from the previous post. Except this time, you can’t blame the vendor. You built it yourself, one developer at a time, one undocumented decision at a time.

    Years later, someone tries to integrate these systems. Mapping tables everywhere. Transformation logic. Test case multiplication. All preventable. If anyone had enforced standards before the first line of code shipped.

    The Cost of “We’ll Fix It Later”

    “We’ll standardize it when we integrate.” No, you won’t. Integration deadlines are tight. You’ll build mapping tables and move on.

    “We’ll clean it up in the next release.” No, you won’t. The next release has new features. Data cleanup never makes the roadmap.

    “We’ll address it during the replatforming project.” Maybe. If the replatforming project happens. If it doesn’t get cancelled when costs overrun. If the team remembers why this matters.

    Technical debt compounds. Every month, more data gets created using non-standard codes. More integration points get built assuming those codes. More reports get written around that structure. The cost of fixing it grows. The willingness to fix it shrinks.

    Eventually, it becomes “how we do things here.” Accepted. Documented. Permanent.

    You could have prevented it on day one.

    What You Can Do

    If you’re an Architect: Document reference data standards for your domain. Make them findable. Make them required. Review compliance at appropriate times.

    If you’re a Developer: Don’t create custom reference data codes. Ask which standard applies. Use it. If no standard is documented, request one before you build.

    If you’re a Business Analyst: Reference standards explicitly in requirements. “System shall use ISO 3166 for countries” not “system shall have a country field.” Make standards compliance part of acceptance criteria.

    If you’re in QA: Add standards compliance to test plans. Verify codes match ISO or documented industry standards. Flag non-compliance as defects.

    If you lead development: Enforce standards compliance in code reviews. Make it clear that architecture violations create technical debt, and technical debt has ongoing costs.

    If you’re buying AND building: The previous post showed how you could evaluate vendors before you sign contracts. This post shows how to enforce standards before you write code. Same goal: prevent technical debt at the source, not manage it forever.

    Vendors who don’t use standards cost you. Developers who don’t use standards cost you. You can choose better vendors. You can enforce better standards.

    Make both choices.

    Building in-house gives you control.

    But control without enforcement just means you own the technical debt instead of inheriting it. The choice is yours. Make it before the first line of code ships.

    Because after that, you’re just managing consequences.


    This post is part of The Authority Rule series exploring how authority patterns determine data classification and governance. The framework emerged from real-world experience building enterprise data architecture across multiple industries.

    Read the full series at authorityrule.com/

  • Contracts: The Fifth Authority Type

    Post 8 of The Authority Rule Series


    Something didn’t fit.

    I’d mapped out four authority types: Reference Data (external authority), Master Data (internal strategic), Master Records (internal operational), and Transactions (captured events). The framework worked well—until I hit contracts. Then nothing lined up cleanly.

    Contracts.

    They’re not quite transactions.

    A payment transaction records an event—it happened on 15 December. A contract defines terms that govern behavior: when and how often payments are due, for example.

    Contracts are a fifth authority type: Contractual Authority.

    What Makes Contractual Authority Different

    Contractual authority is bilateral or multilateral binding agreement.

    Neither party can unilaterally change it. Your customer can’t rewrite the terms. You can’t rewrite them either.

    Both must follow what was signed. Courts arbitrate disputes—when parties disagree on what the contract means, external authority interprets. Just like ISO interprets currency standards, courts interpret contractual terms.

    It creates shared authoritative terms. Within the relationship, the contract functions as reference data both parties must follow. But unlike ISO standards, it only binds the signatories.

    It’s immutable as a document.

    Why This Matters in Insurance

    I work in P&C insurance.

    This distinction is fundamental to how we operate.

    The policy document is contractual authority. It defines coverage, exclusions, limits, terms.

    Once signed, both insurer and insured must follow it. Claims adjusters can’t just decide to pay something the policy excludes—they’re bound by the policy terms. Courts interpret ambiguous clauses when disputes arise. The contract governs what happens, not internal policy changes made after signing.

    Premium payments are transactions.

    They record what happened under the policy terms. Insured paid £500 on 1 January—that’s captured authority, it happened.

    Claims are transactions governed by contractual authority. The claim event happened (transaction). Whether it’s covered depends on the policy terms (contractual authority).

    Coverage definitions start as master data. Before the policy is sold, your coverage tiers, exclusions lists, and limits are internal master data. Management can change them tomorrow for new policies.

    But once signed into a policy, they become contractual authority.

    Now you can’t unilaterally change those definitions for this customer.

    When Master Data Transforms

    I see this transformation constantly.

    Your product codes are master data. You control your product hierarchy. Management can restructure it tomorrow for new customers.

    But write those codes into a multi-year supplier agreement?

    Now they’re contractual authority. You can’t just “update the product hierarchy” without renegotiating contracts.

    Your pricing structure is master data.

    You decide how to price your services. But sign a three-year SLA with fixed prices? Those prices are contractual authority—you’re bound by what you signed.

    Your service levels are master data. You define what “standard support” means internally. Include those definitions in customer contracts? They’re now contractual authority. The customer can hold you to your own definitions. Not your updated definitions—the ones you agreed to.

    Your coverage tiers in insurance are master data internally. Define Bronze, Silver, Gold however you want. But underwrite a policy using those tiers? Contractual authority for that policy’s lifetime.
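
    A small sketch of that transformation in code – field names and tier definitions are invented, but the pattern is the point: signing snapshots the terms, and later Master Data changes don’t touch them:

    ```python
    from datetime import date

    # Internal Master Data - management can change this tomorrow for new policies.
    coverage_tiers = {"Gold": {"response_hours": 4, "flood_cover": True}}

    def sign_policy(policy_id: str, tier: str, signed_on: date) -> dict:
        """Signing snapshots the bound terms - contractual authority from here on."""
        return {
            "policy_id": policy_id,
            "signed_on": signed_on,
            "bound_terms": dict(coverage_tiers[tier]),  # frozen copy, not a reference
        }

    policy = sign_policy("POL-2025-001", "Gold", date(2025, 3, 1))

    # Later, management excludes flood cover for NEW policies (a Master Data change)...
    coverage_tiers["Gold"]["flood_cover"] = False

    # ...but the signed policy still carries the terms both parties agreed to.
    assert policy["bound_terms"]["flood_cover"] is True
    ```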

    The Governance Trap

    Companies miss this transformation all the time.

    They see “master data change.”

    They miss “contractual authority impact.”

    Insurance company decides to update their household policy terms to exclude flood damage in high-risk zones. Sounds like a master data change—management decision, internal authority. But tens of thousands of existing policies have already been sold. Those are contractual authority. You can’t retroactively change coverage for signed policies—the old terms remain in force until renewal.

    What looks like a simple master data update is actually dual governance.

    New master data (updated policy template). Existing contractual authority (signed policies unchanged). You’re managing two versions until policies renew.

    Software company restructures product hierarchy—”Enterprise” tier splits into “Enterprise Standard” and “Enterprise Plus.” Management approved. Seems straightforward.

    But hundreds of enterprise customers have contracts.

    Those contracts reference “Enterprise” with specific feature lists. Contractual authority.

    You need contract amendments or grandfather clauses—not just a product hierarchy update.

    The Test for Contractual Authority

    Can you unilaterally redefine this tomorrow?

    Your product hierarchy → Yes, management decides → Master Data

    ISO currency codes → No, ISO decides → Reference Data

    Signed contract terms → No, both parties agreed → Contractual Authority. The signed contract happened at a point in time—an immutable document, captured like a Transaction. But it creates ongoing binding authority, governs future behavior—unlike a Transaction.

    Do others need to accept your redefinition?

    Your sales territories → No, internal only → Master Data. Signed policy coverage → Yes, customer and courts must accept → Contractual Authority.

    The Five Authority Types

    External Authority (Reference Data):

    • ISO, governments, regulators decide
    • Universal or industry-wide
    • You follow or you’re non-compliant
    • Example: Currency codes, tax rates

    Internal Strategic Authority (Master Data):

    • Management decides
    • Internal to your organization
    • You can redefine unilaterally
    • Example: Sales territories, product hierarchies

    Internal Operational Authority (Master Records):

    • Employees maintain within structures
    • Day-to-day operations
    • Must follow master data and reference data
    • Example: Customer #12345, Product SKU ABC

    Captured Authority (Transactions):

    • What happened at a moment
    • Immutable as an event
    • Records the state of other data types
    • Example: Invoices, payments, claims

    Contractual Authority (Contracts):

    • Bilateral/multilateral binding agreement
    • Neither party can unilaterally change
    • Courts arbitrate disputes
    • Transforms your master data into binding terms
    • Example: Policies, SLAs, supplier agreements

    Each pattern requires different governance.

    Reference Data: Monitor and comply.
    Master Data: Management approval and impact analysis.
    Master Records: Operational quality management.
    Transactions: Audit and correction processes.
    Contractual Authority: Legal review, negotiation, and amendment processes.

    Don’t treat contract changes like data updates.

    What This Means Monday Morning

    Before signing contracts, review what master data you’re binding.

    Product codes, pricing structures, service definitions—once they’re in signed contracts, they’re no longer just internal master data. You’ve transformed them.

    Distinguish policy documents from policy transactions.

    The policy is contractual authority. Premium payments are transactions under those terms.

    Recognize you’re managing two versions.

    New master data for future contracts. Existing contractual authority for signed agreements. Both need governance, but different governance—don’t confuse updating your product catalog with amending hundreds of customer contracts.

    When updating master data, audit contractual impact. Which signed contracts reference this? Do you need amendments? Grandfather clauses? Migration plans?

    Don’t treat contract changes like data updates.

    Amending contractual authority requires negotiation, not just management approval.

    The PDF Problem

    Here’s where this gets practical.

    The five authority types matter operationally.

    Most contracts are PDFs.

    Unstructured.

    Unqueryable.

    You want to restructure your product hierarchy. Management approves. Seems like a straightforward master data change. But which contracts reference those products?

    You can’t query PDFs.

    Someone has to manually read hundreds of contracts.

    Or worse, you just make the change and hope nothing breaks.

    You can’t ask “which contracts reference our Bronze coverage tier?” You’d have to read every policy document manually. You can’t analyze “what’s the impact of retiring product code PROD-100?” No database knows which contracts bind that code. You can’t audit “how many policies still use the old service level definitions?” The data exists in PDFs, not in queryable systems.

    So when management wants to change master data, you can’t assess contractual authority impact.

    You’re governing blind.

    This is why AI tools that extract contract data into structured formats – JSON, XML, or similar – are becoming more common.

    Tools that read PDFs, extract key terms, and make them queryable. It’s not a perfect solution—retrofitting is messy—but it’s better than nothing.

    Insurance is slowly moving in this direction. Contract extraction tools. Structured policy data initiatives.

    But most industries are still drowning in PDFs, unable to analyze which master data has transformed into contractual authority.

    The framework shows why this matters. Being able to query your contracts would make it actionable.
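
    As a sketch of what “queryable contracts” could look like once key terms are extracted into structured data (the index shape and fields are hypothetical):

    ```python
    # A structured index of key terms extracted from signed contracts.
    # Records and fields are hypothetical.
    contract_index = [
        {"contract_id": "POL-2023-114", "status": "active",
         "coverage_tier": "Bronze", "product_codes": ["PROD-100"]},
        {"contract_id": "SLA-2024-007", "status": "active",
         "coverage_tier": "Gold", "product_codes": ["PROD-200"]},
    ]

    def contracts_referencing_tier(tier: str) -> list:
        """Which active contracts bind this coverage tier?"""
        return [c["contract_id"] for c in contract_index
                if c["status"] == "active" and c["coverage_tier"] == tier]

    def contracts_referencing_product(code: str) -> list:
        """Impact analysis before retiring a product code."""
        return [c["contract_id"] for c in contract_index
                if c["status"] == "active" and code in c["product_codes"]]

    print(contracts_referencing_tier("Bronze"))       # ['POL-2023-114']
    print(contracts_referencing_product("PROD-100"))  # ['POL-2023-114']
    ```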

    Why The Five Types Matter

    The Authority Rule started simple: Can you redefine it? Do others need to accept?

    But every time I used it, I kept finding five patterns, not two: External (you follow), Internal Strategic (management decides), Internal Operational (employees execute), Captured (what happened), Contractual (bilaterally binding). Each pattern requires different governance—you can’t apply the same rules to all five types.

    Reference Data: Monitor and comply.
    Master Data: Management approval and impact analysis.
    Master Records: Operational quality management.
    Transactions: Audit and correction processes.
    Contractual Authority: Legal review, negotiation, and amendment processes.

    Get the authority pattern right, and governance becomes clear. Treat a signed contract like master data that you can change? Expect lawsuits. Treat your internal product hierarchy like contractual authority requiring customer approval? Paralysis.

    This framework isn’t about data formats. It’s about authority patterns.

    And contracts proved I needed a fifth type to capture bilateral binding authority.


    The Authority Rule: Can you redefine what this means, and do you need others to accept your definition? The answer reveals which authority pattern you’re dealing with.