Tag: Vendor Evaluation

  • Technical Capabilities: Architecture’s Missing Language?

    Architecture is too slow. I think most of us architects have heard that. Not because architects are slow, per se, but because every architect describes things differently.

    You can’t compare implementations if they’re described in incompatible languages. You can’t identify patterns if everyone uses different terms. It’s a lot harder to build reusable technical components if requirements are written differently every time.

    I needed a standardized way to specify what technology needs to DO. Independent of vendor or solution. Independent of architect. I couldn’t find one that worked, so I built one.

    Full disclosure: I haven’t read every enterprise architecture book. Frankly, it is enough to keep up with the Business Architecture field and others that interest me. This might exist in TOGAF, ArchiMate, or some framework I haven’t encountered. If it does, tell me. I’d genuinely like to see it. But in my career, I haven’t found it in practice.

    What follows is what I built because I needed it. It’s evolving. Not final. But useful.

    The Problem: What vs How Gets Mixed Up

    Business Capability Models describe what the business does – Order Management, Customer Service, Financial Reporting. But they don’t describe what technology needs to DO to enable those capabilities.

    In my experience, what happens is that people want to add technical capabilities to the Business Capability Model. They put “Document Generation” next to “Risk Assessment” in the same model, and the result is confusion.

    Additionally, I have observed that technology requirements are written differently every time: no reuse, no patterns, just reinvention. And they specify vendors instead of capabilities – “System shall use AWS” locks you in before you’ve evaluated alternatives.

    It made me realise technical capabilities needed their own model. Separate from business capabilities.

    What Technical Capabilities Actually Are

    A technical capability describes what technology can do, independent of vendor.

    Not “We use AWS.” Instead: “We need Cloud Storage capability.”

    Not “We implement Azure API Management.” Instead: “We require API Management capability.”

    The capability stays stable. The vendor can change. Think specification, not solution.

    These aren’t developer requirements. Developers need user stories, API contracts, data models, test cases. Those come from business analysts and solution architects. Technical capabilities answer a different question: “What must the platform be able to do?” Not “What should the system do for this user story?”

    Different layers. Both necessary. Not mixed together.

    The Framework: 12 Core Domains

    I’ve been developing a Technical Capability Model that organizes capabilities into 12 core domains, spanning everything from Data & Analytics through Emerging Technologies. Each domain breaks down into capability groups. Each group contains specific capabilities.

    Three levels:

    • Level 1: Domain (Data & Analytics)
    • Level 2: Capability Group (Data Quality Management)
    • Level 3: Specific Capability (Data Quality Rules Engine)

    Each capability includes what it does and why it matters. Approximately 300 technical capabilities total.
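
    If it helps to see the shape of the model as data, here’s a minimal sketch in Python. Data Quality Management (TC-1.5) appears later in this post; the TC-1 and TC-1.5.1 identifiers are my own illustrative numbering, not taken from the published catalog.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Capability:
        """Level 3: a specific capability, e.g. Data Quality Rules Engine."""
        id: str
        name: str
        description: str  # what it does and why it matters

    @dataclass
    class CapabilityGroup:
        """Level 2: a capability group, e.g. Data Quality Management."""
        id: str
        name: str
        capabilities: list[Capability] = field(default_factory=list)

    @dataclass
    class Domain:
        """Level 1: a core domain, e.g. Data & Analytics."""
        id: str
        name: str
        groups: list[CapabilityGroup] = field(default_factory=list)

    # Illustrative entry. TC-1.5 (Data Quality Management) appears later in this
    # post; the TC-1 and TC-1.5.1 identifiers are assumed for the example.
    data_analytics = Domain(
        id="TC-1",
        name="Data & Analytics",
        groups=[
            CapabilityGroup(
                id="TC-1.5",
                name="Data Quality Management",
                capabilities=[
                    Capability(
                        id="TC-1.5.1",
                        name="Data Quality Rules Engine",
                        description="Define, test and execute data quality rules.",
                    )
                ],
            )
        ],
    )
    ```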

    [LINK: Technical Capability Model download]

    The framework is freely available. Standardized specifications help everyone. If you find it useful, tell me how you’re applying it.

    After I developed the model for identifying what a system needed to be able to do, I realised it could be used for other things too.

    Domain-Specific Extensions

    Standard technical capabilities apply everywhere – API Management, Data Storage, Workflow Orchestration. But some domains need specialized capabilities.

    Banking needs SWIFT message processing. Healthcare needs HL7 message processing. Manufacturing needs supply chain integration capabilities. The framework extends with domain-specific capability groups while maintaining core structure.

    Take the 12 core domains. Add your industry-specific extensions. Remove capabilities you’ll never use. The aim is standardization within a domain, without forcing every industry into the same mold.

    Requirements vs Capabilities: The Distinction That Changes Everything

    Here’s the problem with most requirements: they describe what the system needs to DO for this specific use case.

    “System shall calculate price and applicable taxes for customer orders.” “System shall validate customer address against postal database.” “System shall generate contracts in PDF format.”

    Those are use-case requirements. Specific to this project. Not reusable. Worse: they mix business logic with technical capabilities.

    “Calculate price and taxes” is business logic. What technical capabilities could that potentially require?

    • Business Rules Management (TC-8.2) – to apply pricing rules and tax calculations
    • Data Processing & Preparation (TC-1.4) – to transform input data
    • Data Aggregation (TC-1.4.3) – to sum price components
    • API Management (TC-3.1) – if calling external tax calculation services

    The Technical Capability Model forces clarity. You can’t write “System shall calculate price.” You have to specify which technical capabilities the system needs in order to perform that calculation.

    Instead of: “System shall validate customer address”
    Write: “System requires: Data Quality Management (TC-1.5), API Management (TC-3.1) for postal validation service integration”

    Instead of: “System shall generate contracts”
    Write: “System requires: Document Generation (TC-11.2), Template Management (TC-11.2.1), Multi-Format Output (TC-11.2.3)”
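
    Here’s the same contract example as a structured record rather than a sentence. The TC codes are the ones quoted above; the record format itself is just a sketch, not part of the model.

    ```python
    # Sketch only: a capability-based requirement as structured data.
    # The TC codes are taken from the examples above; the field names are assumed.
    contract_generation = {
        "business_need": "Generate contracts for approved customer orders",
        "required_capabilities": [
            {"id": "TC-11.2",   "name": "Document Generation"},
            {"id": "TC-11.2.1", "name": "Template Management"},
            {"id": "TC-11.2.3", "name": "Multi-Format Output"},
        ],
        # The business logic -- which template, which approval rules, which output
        # formats -- stays in the use case, not in the capability specification.
    }
    ```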

    Now those technical capabilities are reusable. Next project needs document generation? Same capabilities, different templates, different use case.

    The capabilities don’t change. The business application of them does.

    Why This Matters: The Reuse Advantage

    Traditional procurement:

    Project A writes requirements: “System shall calculate prices with promotional discounts.”

    Project B writes requirements: “System shall calculate shipping costs with volume discounts.”

    Project C writes requirements: “System shall calculate subscription prices with tiered pricing.”

    All three procurements specify rules engines. All three write requirements from scratch. No one realizes they’re buying the same capability three times.

    Capability-based procurement:

    All three projects identify: Business Rules Management (TC-8.2) required. Enterprise architecture says: “We’re procuring rules engine capability once.” Platform team evaluates vendors against standard capability requirements. Three projects use the same rules engine for different business logic. Specify once. Procure once. Reuse everywhere.

    That’s not constraint. That’s clarity enabling efficiency.

    Technical capabilities don’t constrain business logic. They enable it. “Calculate price and taxes” is business logic – different for every organization, often different for every product line. But the technical capability, Business Rules Management, is the same. Authoring rules, testing rules, versioning rules, deploying rules – that’s what technology needs to be able to DO.

    WHAT rules you write? Business logic.

    WHERE you write those rules? Technical capability.

    Business teams define logic. Architecture teams specify technical capabilities needed to execute that logic. Procurement teams find vendors who provide those capabilities. Different responsibilities. Clear boundaries.

    The Objections (Let’s Address Them Now)

    If you’re an architect, you’re already thinking of problems with this approach. Let’s address them directly.

    Objection 1: “You’re constraining how vendors solve problems”

    No. We’re forcing clarity about what needs solving.

    Without capability specifications: RFP says “System shall handle pricing and taxes.” Five vendors interpret that five different ways. One has no rules engine and hard-codes the logic. One has a sophisticated rules engine you’ll never use. Three have different rules engines with incompatible capabilities.

    With capability specifications: RFP says “System requires Business Rules Management (TC-8.2) including Rules Authoring, Version Control, Rule Testing.” All vendors know exactly what’s required. They propose solutions that provide those capabilities. You evaluate HOW they provide them. Vendors compete on implementation quality, not interpretation.

    You’re not constraining solutions. You’re clarifying requirements. Vendors still choose how to implement. You just specified what must work when they’re done.

    Objection 2: “This is just another framework to maintain”

    Fair. But you’re already maintaining something.

    Either you’re maintaining inconsistent architecture documentation where every architect describes things differently, or you’re maintaining consistent documentation using a standard catalog. One enables comparison and reuse. One doesn’t.

    Pick your maintenance burden.

    Objection 3: “What if I need a capability that’s not in your catalog?”

    Add it.

    This is V0.5. It’s not complete. It’s not final. If you need a capability that doesn’t exist in the framework, that’s useful information. Either I missed something obvious (tell me, I’ll add it), you’ve found a domain-specific capability (extend the framework), or you’re confusing business logic with technical capability (happens often).

    The catalog evolves through use. That’s the point of sharing it.


    How You Might Use This

    Vendor evaluation: List required technical capabilities. Evaluate which vendors provide them. Identify gaps before concluding the procurement. No more “we discovered this limitation after contract signing.”
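
    A capability gap check doesn’t need tooling. Here’s a minimal sketch using TC codes from this post; the vendor coverage data is invented for illustration.

    ```python
    # Minimal capability gap check for vendor evaluation. The vendor coverage data
    # here is invented for illustration; the TC codes are from earlier in the post.
    required = {"TC-8.2", "TC-1.5", "TC-3.1", "TC-11.2"}

    vendor_coverage = {
        "Vendor A": {"TC-8.2", "TC-3.1", "TC-11.2"},
        "Vendor B": {"TC-8.2", "TC-1.5", "TC-3.1", "TC-11.2"},
        "Vendor C": {"TC-3.1", "TC-11.2"},
    }

    for vendor, provided in vendor_coverage.items():
        gaps = required - provided
        print(f"{vendor}: {'no gaps' if not gaps else 'missing ' + ', '.join(sorted(gaps))}")
    ```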

    Functional requirements: Specify capabilities, not solutions. Then choose solutions that provide those capabilities.

    Current state documentation: Document which technical capabilities exist today. Identify gaps. Identify redundancies. Compare systems using common language. No more “we have three different things that do basically the same job but we can’t tell because they’re described differently.”

    Platform design: Specify which technical capabilities the platform must provide. Teams build on platform capabilities. Clear boundaries between platform and applications.

    Pattern recognition: If every architecture uses the same capability catalog, patterns become visible. “This type of module consistently requires these capabilities.” Patterns become reusable specifications.

    Different architects still have different opinions. But now they’re disagreeing about the same things, using the same language. That’s progress.

    What This Enables

    Architecture reviews: Faster. Common language.

    Vendor comparisons: Clearer. Same evaluation criteria.

    Current state documentation: Comparable. Standard format.

    Requirements: Portable. Capability-based, not solution-based.

    Patterns: Reusable. Described consistently.

    Architecture accelerates. Not by working faster, but by working with standardized, reusable specifications.

    Get the Framework

    The complete Technical Capability Model includes all 12 domains, approximately 300 capabilities, with descriptions and example technologies.

    [LINK: Technical Capability Model download]

    I’m sharing it because standardized specifications help everyone. If you find it useful, tell me how you’re applying it. If you find gaps, tell me what’s missing.

    This framework evolves through use, not theory.

    What Comes Next

    This is a working framework. I’m testing it. Refining domains. Adjusting definitions. Adding capabilities I didn’t realize I needed.

    Maybe someone will tell me this exists in chapter 7 of an enterprise architecture book I haven’t read. Great. Point me to it. But right now, this is what I built because I needed it.

    Take this framework. Adapt it. Make it work for your context. Add your domain-specific capabilities. Remove what you don’t need. The value isn’t in my specific catalog. The value is in having A catalog that everyone in your organization uses consistently.

    Architecture needs standardized technical capabilities. Not to eliminate creativity. To eliminate rework.

    When architects speak the same language, architecture accelerates. A Technical Capability Model is one way to provide that language.


    This post is part of my Architecture Foundations series exploring systematic approaches to enterprise architecture. Read more at authorityrule.com/

  • Standards Enforcement: Build vs Buy Makes No Difference

    Post 9 in The Authority Rule Series

    Last post covered buying systems from vendors who don’t use ISO standards. The hidden costs, the mapping tables, the technical debt that compounds forever.

    You might think: “We’ll build it ourselves. Then we control the outcome.”

    You do. If you enforce standards.

    Most teams don’t.


    The Build vs Buy Parallel

    When you buy a vendor system that doesn’t use ISO standards, you inherit their technical debt. You discover it during implementation or years later, too late to walk away.

    When you build a system in-house that doesn’t use ISO standards, you create your own technical debt. Often, you don’t discover it until integration, or audit, or years later.

    Same problem. Different source.

    The advantage of building in-house: You can prevent it. The reality: Most teams don’t.

    What Should Happen (But Often Doesn’t)

    When you build a bridge, the structural engineer specifies materials before construction starts. Grade 50 steel. Class 8.8 bolts. 40MPa concrete. If those materials can’t deliver the architect’s design, they sort it out before building, not after.

    In my experience software development rarely works this way.

    Business requirement: “The system needs to handle international customers.” Business Analyst writes requirements. Developer builds it. Using what country codes?

    Often, no one specifies. The developer makes a choice – maybe ISO 3166, maybe a custom list, maybe whatever dropdown library they found online, maybe just a text field. You discover the problem later during integration testing, when the compliance team asks why your system defines countries differently than the third-party screening service, or during the first regulatory audit.

    Or you never discover it. You just accumulate technical debt and wonder why integration is always harder than you thought it would be.

    Why This Happens (And Why I Missed It)

    I assumed developers would use ISO standards for countries, currencies, languages. Why wouldn’t they?

    Maybe ISO standards cost money? Maybe developers don’t know they exist? Maybe “I only need 10 countries for this system” seems simpler than researching the standard? Maybe “Micronesia (Federated States)” versus “Federated States of Micronesia” doesn’t seem like it matters when you’re just building a dropdown?

    I’ve now encountered at least three separate systems, over my career, where vendors or development teams used custom codes instead of ISO standards. Each time, I was surprised. Maybe I’m blind to this. Maybe my background made it invisible.

    But the evidence is clear: there’s no profession-wide expectation that reference data means ISO standards, or an industry standard. No enforcement mechanism. No licensing requirement. No inspector who fails the build. Developers make choices under time pressure. Sometimes they choose standards. Sometimes they choose “good enough for now.”

    “Now” becomes forever, and when they don’t use standards, you also pay forever.

    Role Accountability That Could Work

    I’ve been thinking about what could improve the situation, and I think the answer is clearer role accountability that makes standards enforceable. Here’s what it could look like.

    Architects define which standards apply. ISO 3166 for countries. ISO 4217 for currencies. ISO 639 for languages. Industry standards like ACORD for insurance, NAICS for industry classification. Not suggestions—requirements or patterns. Documented, findable, required.

    Developers use the standards. When building country fields, ISO 3166. When building currency fields, ISO 4217. No custom codes. No “we’ll fix it later.” If the standard isn’t documented, ask Architecture which one applies. Don’t invent your own. Code reviews check compliance. Architecture violations aren’t style issues—they’re technical debt with compounding costs.

    Business Analysts reference standards in requirements. Most don’t; I didn’t in my earlier stints as a business analyst. They write “system shall capture customer country” instead of “system shall use ISO 3166-1 alpha-2 codes.” User stories should reference standards explicitly. Acceptance criteria should verify compliance.

    QA verifies compliance. Test data uses correct codes. System validation enforces standards. If country codes don’t match ISO 3166, that’s a defect—flag it. Most QA plans don’t check for this.
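
    Here’s a rough sketch of what that QA check could look like, using a deliberately tiny subset of ISO 3166-1 alpha-2 codes; a real pipeline would load the full list from an authoritative source.

    ```python
    # Sketch of a QA / CI check on test data. The ISO 3166-1 alpha-2 subset below
    # is deliberately tiny and illustrative; load the full list from an
    # authoritative source in a real pipeline.
    ISO_3166_ALPHA_2 = {"SE", "SV", "GB", "FM", "US", "DE"}

    def check_country_codes(records: list[dict]) -> list[str]:
        """Return one defect message per record whose country code is not ISO."""
        defects = []
        for record in records:
            code = record.get("country")
            if code not in ISO_3166_ALPHA_2:
                defects.append(f"record {record.get('id')}: non-ISO country code {code!r}")
        return defects

    # Note: "SV" passes a pure membership check because it is a valid ISO code --
    # for El Salvador. Validation catches invented codes; only a documented,
    # reviewed standard catches valid codes used with the wrong meaning.
    print(check_country_codes([{"id": 1, "country": "SE"}, {"id": 2, "country": "Sweden"}]))
    ```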

    When You Actually Need Custom Codes

    Sometimes you genuinely need translations between standards and legacy reality.

    Legacy system migration: Old system used “GB” to mean Great Britain. ISO 3166 uses “GB” for United Kingdom. During migration, you need to handle both. Time-boxed. Documented. Translation layer clearly identified. Removed post-migration.

    Vendor integration: External system sends “Micronesia” as free text. You can’t change their system. Build a translation layer at the integration boundary. Map to ISO internally. Keep the non-compliant data at the edge.

    Customer-facing display: ISO code “GB” displays as “United Kingdom” in the UI. That’s presentation layer, not data layer. ISO standard underneath, friendly text on top.

    But translations live in specific places—integration boundaries, presentation layers. They don’t pollute the core data model.
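
    A minimal sketch of what keeping translation at the edge can look like, assuming a hypothetical vendor feed that sends country names as free text. The mapping entries are illustrative.

    ```python
    # Translation layer at the integration boundary (sketch). Internally the core
    # data model stores ISO 3166-1 alpha-2 codes only; the vendor's free-text
    # values are mapped -- or rejected -- at the edge.
    VENDOR_TO_ISO = {
        "Micronesia": "FM",
        "Federated States of Micronesia": "FM",
        "United Kingdom": "GB",
    }

    def to_iso_country(vendor_value: str) -> str:
        try:
            return VENDOR_TO_ISO[vendor_value.strip()]
        except KeyError:
            # Unmapped values are rejected at the edge, not stored in the core model.
            raise ValueError(f"Unmapped vendor country value: {vendor_value!r}")

    # Presentation is the reverse direction: ISO code underneath, friendly text on top.
    ISO_TO_DISPLAY = {"GB": "United Kingdom", "FM": "Micronesia (Federated States of)"}
    ```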

    What Happens Without Enforcement

    Developer A uses “SV” for Sweden (thinking Svenskt). Developer B uses “SE” for Sweden (ISO correct). Developer C uses “Sweden” as free text because the field was varchar(50). Developer D uses “Sve” because they needed varchar(3) and didn’t know ISO existed.

    Now you have four definitions of Sweden across your systems.

    You just rebuilt the vendor problem from the previous post. Except this time, you can’t blame the vendor. You built it yourself, one developer at a time, one undocumented decision at a time.

    Years later, someone tries to integrate these systems. Mapping tables everywhere. Transformation logic. Test case multiplication. All preventable. If anyone had enforced standards before the first line of code shipped.

    The Cost of “We’ll Fix It Later”

    “We’ll standardize it when we integrate.” No, you won’t. Integration deadlines are tight. You’ll build mapping tables and move on.

    “We’ll clean it up in the next release.” No, you won’t. The next release has new features. Data cleanup never makes the roadmap.

    “We’ll address it during the replatforming project.” Maybe. If the replatforming project happens. If it doesn’t get cancelled when costs overrun. If the team remembers why this matters.

    Technical debt compounds. Every month, more data gets created using non-standard codes. More integration points get built assuming those codes. More reports get written around that structure. The cost of fixing it grows. The willingness to fix it shrinks.

    Eventually, it becomes “how we do things here.” Accepted. Documented. Permanent.

    You could have prevented it on day one.

    What You Can Do

    If you’re an Architect: Document reference data standards for your domain. Make them findable. Make them required. Review compliance at appropriate times.

    If you’re a Developer: Don’t create custom reference data codes. Ask which standard applies. Use it. If no standard is documented, request one before you build.

    If you’re a Business Analyst: Reference standards explicitly in requirements. “System shall use ISO 3166 for countries” not “system shall have a country field.” Make standards compliance part of acceptance criteria.

    If you’re in QA: Add standards compliance to test plans. Verify codes match ISO or documented industry standards. Flag non-compliance as defects.

    If you lead development: Enforce standards compliance in code reviews. Make it clear that architecture violations create technical debt, and technical debt has ongoing costs.

    If you’re buying AND building: The previous post showed how you could evaluate vendors before you sign contracts. This post shows how to enforce standards before you write code. Same goal: prevent technical debt at the source, not manage it forever.

    Vendors who don’t use standards cost you. Developers who don’t use standards cost you. You can choose better vendors. You can enforce better standards.

    Make both choices.

    Building in-house gives you control.

    But control without enforcement just means you own the technical debt instead of inheriting it. The choice is yours. Make it before the first line of code ships.

    Because after that, you’re just managing consequences.


    This post is part of The Authority Rule series exploring how authority patterns determine data classification and governance. The framework emerged from real-world experience building enterprise data architecture across multiple industries.

    Read the full series at authorityrule.com/

  • Is there a hidden cost in vendor contracts: Reference Data?

    Micronesia potentially broke our system.

    Not catastrophically. Just… wrong. A Business Analyst flagged it. One potential data mismatch in a system we’d already used for years, now maybe causing issues with data mapping.

    It reminded me of the Sweden/El Salvador disaster from Post 2. A Swedish developer picked “SV” for Sweden, not knowing ISO had assigned it to El Salvador. Years of mapping tables to fix someone else’s assumption.

    Over the weekend, I started thinking about every procurement process I’d been part of.

    We never evaluated reference data standards.

    Not once.

    What I Evaluated (And What I Missed)

    At EDS, I evaluated vendor systems as the business expert on procurement teams. At Catalyst Housing, I led procurement projects.

    We checked everything. Functionality. Integration architecture. Security. Implementation timeline. Commercial terms. Vendor stability.

    Everything except reference data standards.

    Not because I didn’t care. Because it seemed obvious they’d be right.

    The Impact Cascade You Didn’t Price In

    When a vendor doesn’t use ISO standards, you’re not just buying “some integration work.” You’re buying consequences that ripple through your entire technology estate.

    Initial impact: Someone builds mapping tables. Vendor codes to ISO. Every vendor update/upgrade, you might need to update the mappings. Every ISO change, you update. Forever.

    Your compliance checking service uses ISO standards. Now you need transformation logic. Test cases. Exception handling. What happens when the mapping fails?

    Every new integration: Another mapping layer. More transformation logic. Compatibility testing with existing mappings. Documentation updates that future teams will need.

    Every new vendor: Three vendor systems, three different definitions of “Micronesia”. System A: “Micronesia”. System B: “FM” (ISO code, wrong mapping). System C: “FSM” (wrong definition). Three mapping tables. Not to ISO—to each other. Testing matrix multiplication. Exception handling for mismatches between systems.

    Test automation becomes a nightmare. Which version of “Micronesia” does each test case need? Mock data must account for every vendor’s definitions. Automated regression suites fail when mapping changes. CI/CD pipelines need transformation validation. Every automated test touching reference data needs vendor-specific variants.
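
    To make the test-automation point concrete, here’s a sketch of the vendor-specific variants that creep into every suite, assuming pytest as the test runner. The values mirror the example above; the test body is a placeholder.

    ```python
    # Sketch: the same logical test multiplied across vendor-specific spellings
    # of one country. Values mirror the example above; the test body is a stub.
    import pytest

    MICRONESIA_VARIANTS = [
        ("system_a", "Micronesia"),  # free text
        ("system_b", "FM"),          # ISO alpha-2, but mapped inconsistently upstream
        ("system_c", "FSM"),         # custom three-letter code
    ]

    @pytest.mark.parametrize("system,country_value", MICRONESIA_VARIANTS)
    def test_screening_matches_customer_country(system, country_value):
        # Each variant needs its own mock data, its own mapping assertion and its
        # own maintenance when a mapping changes -- that is the multiplication.
        ...
    ```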

    Operational impact: Support tickets when data doesn’t match. Manual workarounds when automation fails. Training new team members on “why we do it this way”. Knowledge loss when key people leave.

    Compliance and audit: Explaining to regulators why systems define data differently. Control weakness findings. Remediation plans. Annual re-findings when nothing fundamentally changes.

    Strategic constraints: New compliance requirements require integration work before implementation. New analytics platforms need mapping before they’re useful. API integrations to partners delayed by transformation complexity. Technical debt that makes every future decision harder.

    The costs aren’t just money. They’re velocity. Opportunity. Strategic flexibility.

    And they compound forever.

    What Should Change

    This should be an explicit evaluation criterion. Not buried in technical architecture review. Not assumed. Scored.

    Before vendor selection, ask which reference data standards they use. ISO 3166 for countries. ISO 4217 for currencies. ISO 639 for languages. Industry standards like ACORD, SWIFT, HL7.

    Then ask how they maintain them. Automatic feed from authoritative source? Manual updates—how often, who’s responsible? Custom definitions—where’s the documented mapping?

    Ask where they deviate from standards. Which entities use non-standard codes? Why? Legacy system? Customer request? Deliberate choice? What’s the mapping approach? Only in the UI?

    Ask how they handle updates. New country codes from ISO? Currency changes like Bulgaria joining the Euro? Deprecated codes?

    Ask what integration support they provide. Pre-built mappings to standards? Documentation of code definitions? Transformation services?

    If vendors can’t answer clearly, price in the integration cost.

    Why I Didn’t See This Coming

    I studied Economics. In that world, you use ISO country codes, ISO currency codes, standardized data definitions. Otherwise analysis is impossible. You can’t compare GDP across countries if everyone defines “country” differently.

    Most systems do use ISO standards. Countries. Currencies. Languages. All the reference data is correct.

    When your academic background and your first decade of professional experience both assume standards compliance, you don’t question it.

    Why would you? These are professional software vendors building enterprise systems. This is foundational.

    Except it’s not foundational to everyone.

    I didn’t think to ask during procurement because I’d never encountered a system that got it wrong.

    The Conversation with Procurement

    If you’re a Business Architect, Enterprise Architect, or data professional reading this, you might be thinking: “Yes! This! How do I get procurement to care?”

    Frame it as risk management, not technical detail.

    “We’re evaluating a vendor that could create ongoing integration costs because they don’t use industry standards for reference data. Should we price this into the commercial negotiation?”

    Make it a standard RFP question.

    “I drafted some questions about reference data standards compliance. Could we add these to the technical architecture section of our RFP template? It’ll help us identify hidden costs earlier.”

    Show the business impact.

    “Three vendor systems define countries differently. When we try to implement compliance checking, we’ll need months of integration work. Could we evaluate reference data standards in future procurements to avoid this?”

    Offer to help.

    “I can review technical architecture sections of vendor responses specifically for reference data compliance. It takes me thirty minutes per vendor and could save us significant integration costs.”

    Don’t make it about governance. Don’t make it about data purity. Make it about cost, risk, and integration complexity.

    Procurement teams manage cost and risk every day. Give them the business case.

    What I’d Do Differently Now

    If I were leading a procurement initiative today, reference data standards would be an explicit evaluation criterion.

    Not a nice-to-have. A scored requirement.

    Compliant with ISO/industry standards: Full marks.

    Custom codes with documented mapping: Partial marks, priced accordingly.

    Custom codes with no mapping plan: Major red flag.
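
    Folded into an evaluation sheet, that scoring can be as simple as the sketch below; the score values are assumptions, not a recommendation.

    ```python
    # Sketch of scoring the reference data criterion. Score values are
    # assumptions for illustration only.
    def reference_data_score(uses_iso_standards: bool, documented_mapping: bool) -> int:
        if uses_iso_standards:
            return 10   # full marks
        if documented_mapping:
            return 5    # partial marks -- price the mapping work into the deal
        return 0        # major red flag

    print(reference_data_score(uses_iso_standards=False, documented_mapping=True))  # 5
    ```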

    Because I’ve seen the costs. I’ve watched integration projects stall on data transformation. I’ve heard of test automation grinding to a halt because mock data needs multiple versions for the same country.

    The Authority Rule taught me: You don’t own reference data. But you own the consequences of choosing vendors who pretend they do.

    What You Can Do

    If you’re in procurement: Add reference data standards compliance to your evaluation criteria. Ask vendors to document their approach. Price non-compliance into commercial terms.

    If you’re a Business Architect or Enterprise Architect: Review your vendor landscape. Document which systems use non-standard codes. Quantify the integration impact across the areas it touches: transformation, testing, operations, compliance, strategy. Take this to procurement with a business case.

    If you’re a vendor: Understand that your customers pay forever for your architectural choices. Using ISO standards isn’t gold-plating. It’s reducing your customers’ total cost of ownership.

    If you’re buying systems off the shelf: Before you sign that contract, ask: “Which reference data standards do you use?” If they can’t answer clearly, you’re buying technical debt.


    The Micronesia conversation started this thinking. But it applies to every vendor system that doesn’t use reference data standards as its internal keys.

    The costs are hidden. The consequences are long-term. And the decision is made during procurement, often without anyone realizing it matters.

    I didn’t realize it earlier in my career.

    I realize it now.


    This post is part of The Authority Rule series exploring how authority patterns determine data classification and governance. The framework emerged from real-world experience building enterprise data architecture across multiple industries.

    Read the full series at authorityrule.com/