
Open Standard · Guide

Protocol Disclosure and Assurance

PDAS asks a simple question: what should a public system publish so an outsider can trust it, integrate with it, operate it, and govern it without relying on private context?

The standard focuses on material facts. It organizes around enduring mechanisms (execution, settlement, interfaces, dependencies, control, economics, operations, assurance, and change history) rather than market narratives that shift with each cycle.

Definition

Materiality

A fact is material if its disclosure would alter a reasonable external reasoner's assessment of the system's fitness for a stated purpose. The definition adapts the securities-law standard of TSC Industries v. Northway (1976), under which information is material if there is a substantial likelihood that a reasonable investor would consider it important, to a broader class of reasoners. In PDAS, the reasonable external reasoner includes integrators evaluating technical compatibility, regulators assessing compliance posture, governance participants evaluating proposals, operators assessing infrastructure requirements, and analysts evaluating structural claims. A fact that would change the assessment of any of these reasoner classes is material. The threshold is contextual rather than absolute: a validator concentration figure is material when the system claims decentralization, and immaterial when the system makes no such claim. The burden of materiality determination rests with the system operator: when in doubt, disclose.
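The contextual threshold can be made concrete with a toy encoding: a fact is material when it bears on a claim the system actually makes. All names here (the function, the trigger map, the claim strings) are illustrative assumptions, not part of the standard.

```python
# Toy encoding of the contextual materiality test: a fact is material only
# relative to the claims the system makes. Names are illustrative assumptions.
def is_material(fact: str, system_claims: set[str], triggers: dict[str, set[str]]) -> bool:
    """triggers maps each claim to the facts whose disclosure bears on it."""
    return any(fact in triggers.get(claim, set()) for claim in system_claims)

# Hypothetical trigger map: decentralization claims make concentration facts material.
TRIGGERS = {"decentralized": {"validator_concentration", "sequencer_control"}}

# Material when decentralization is claimed, immaterial when it is not.
assert is_material("validator_concentration", {"decentralized"}, TRIGGERS)
assert not is_material("validator_concentration", set(), TRIGGERS)
```

The point of the sketch is that materiality is a function of two inputs, the fact and the system's own claims, which is why the same figure can be material for one system and immaterial for another.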

Foundations

Design principles

The standard derives obligations from enduring mechanisms instead of unstable labels such as L2, NFT, metaverse, modular, or AI-native. This principle operationalizes Burke's (1969) dramatistic insight that naming is a form of framing: the labels a system adopts shape how evaluators reason about it. By grounding obligations in structural primitives, the standard resists the evaluative bias that Perelman and Olbrechts-Tyteca (1969) show is inherent in audience-adapted argumentation: discourse calibrated to a particular audience inherits that audience's assumptions, so a standard that accepts market-derived labels inherits the market's evaluative frame.

Systems are assessed through execution, settlement, control, dependencies, interfaces, governance, economics, operations, and assurance. The primitive decomposition draws on Bowker and Star's (1999) insight that classification decisions shape what can be known: by choosing primitives that correspond to enduring structural properties rather than market categories, the standard ensures that its assessment categories remain stable across the lifecycle of the systems they evaluate.

Material disclosure must cover the whole operating surface: API, RPC, gRPC, ABI, node infrastructure, maintainer material, governance, economics, incidents, and machine-readable manifests. Williamson's (1985) transaction cost analysis provides the economic rationale: incomplete disclosure creates information asymmetries that raise the cost of every external transaction with the system, from integration to regulation to governance participation.

Conformance attaches to a concrete deployed release with a date, network scope, control regime, and canonical artifact set. Latour's (1987) concept of immutable mobiles provides the theoretical basis: a disclosure document serves its function only when it can travel across contexts while preserving its referential integrity. Release-binding ensures that the document's claims remain anchored to a specific system state, preventing the temporal unbinding that degrades disclosure value over time.

Material claims point to checkable artifacts, scoped evidence, and public records. Popper's (1963) falsificationist epistemology supplies the foundational criterion: claims that cannot be subjected to potential refutation carry no epistemic weight regardless of how precisely they are stated. Meyer's (1992) design by contract operationalizes the principle for technical systems: every claim should specify conditions under which it can be verified or falsified.

Critical knowledge belongs in canonical public artifacts, with private enablement treated as supplemental. Ostrom's (1990) design principles for commons governance ground this requirement: effective monitoring of shared resources requires that participants can access the information needed to evaluate resource conditions. When critical knowledge is privately held, the information commons collapses into a club good, and the governance mechanisms that depend on informed participation fail. Granovetter's (1985) embeddedness thesis identifies the resulting pathology: system knowledge flows through social networks rather than public infrastructure, creating structural advantage for connected insiders.

Assessment object

Conformance attaches to a release

A brand is too vague. A protocol in the abstract is too vague. The assessed object is the release itself: a defined configuration on defined networks, with named artifacts, named powers, and a known effective date.

(system, networks, canonical artifact set, control regime, effective configuration, release_id, effective_date)

That structure places the disclosure burden on the system. Reviewers should be able to locate material truth in canonical public artifacts without rebuilding it from source code, private channels, or sales conversations.
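The tuple above can be sketched as a record type. The field names follow the tuple in the text; the types and the immutability choice are illustrative assumptions, not a normative definition.

```python
from dataclasses import dataclass
from datetime import date

# A minimal sketch of the release-bound assessment object. Field names follow
# the tuple in the text; types and the frozen choice are illustrative assumptions.
@dataclass(frozen=True)
class Release:
    system: str                           # the named public system
    networks: tuple[str, ...]             # networks the release is deployed on
    canonical_artifacts: tuple[str, ...]  # canonical artifact roots
    control_regime: str                   # who holds which privileged powers
    effective_configuration: str          # identifier of the live configuration
    release_id: str                       # the concrete release under assessment
    effective_date: date                  # when this configuration took effect

# Conformance attaches to this object, not to a brand or an abstract protocol.
r = Release(
    system="example-system",
    networks=("mainnet",),
    canonical_artifacts=("spec-v2.1", "control-registry-v2.1"),
    control_regime="5-of-9 multisig",
    effective_configuration="cfg-7f3a",
    release_id="v2.1.0",
    effective_date=date(2024, 6, 1),
)
```

Freezing the record mirrors the standard's intent: the assessed object is fixed at review time, and a material change produces a new object rather than mutating the old one.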

System map

Primitive-first system model

Categories change quickly. Public systems still have to execute, settle, expose interfaces, route authority, depend on external services, and produce economic outcomes. The primitive model keeps the standard stable while narratives move around it.

System and execution

Release scope, intended use, supported networks, and canonical version.

Where state transitions happen, what logic runs, and under what execution model.

Where correctness anchors, what final means, and under what assumptions it can fail or reverse.

Where transaction and state data lives, who serves it, and what happens when it is unavailable.

Interfaces and operations

ABI, JSON-RPC, REST, gRPC/protobuf, GraphQL, event streams, SDKs, CLIs, and version semantics.

Nodes, validators, sequencers, indexers, provers, backup/restore, monitoring, and failure response.

Maintainers, release authorities, signers, support boundaries, and public responsibility for the system.

Control and governance

Upgrade paths, pause powers, emergency powers, blacklist/allowlist powers, parameter control, and privileged operations.

Oracles, bridges, relayers, sequencers, hosted services, committees, external signers, and legal wrappers.

Who can propose, approve, veto, execute, bypass, or socially coordinate change.

Economics and assurance

Fees, emissions, vesting, endpoint pricing, treasury powers, subsidies, blockspace monetization, and who captures value.

Threat model, audits, testing posture, formal methods, limitations, and evidence of claimed safety properties.

Releases, incidents, migrations, emergency changes, deprecations, and unresolved known issues.

Disclosure surfaces

Required disclosure surface

The disclosure surface spans interfaces, infrastructure, maintainer material, economics, governance, incidents, and machine-readable truth.

What the system claims to do, what terms mean, and which invariants or guarantees are actually asserted.

Execution path, settlement model, dependency topology, trust boundaries, failure modes, and architectural trade-offs.

ABI, JSON-RPC, REST, gRPC/protobuf, GraphQL, event schemas, auth models, compatibility policy, and version semantics.

Full nodes, archive nodes, validators, sequencers, indexers, provers, monitoring, recovery, and self-hosting posture.

Release engineering, signer changes, migrations, operational authority, escalation, and release accountability.

Proposal rights, veto rights, emergency paths, social dependencies, timelocks, and the actual route by which behavior changes.

Fees, endpoint pricing, emissions, vesting, treasury powers, value capture, and friction used as part of the business model.

Material incidents, emergency actions, migrations, unresolved issues, and canonical change history.

Structured manifests that keep human-readable and machine-readable truth in parity.

This surface also includes the business reality of the system. Endpoint pricing, blockspace monetization, support asymmetry, and documentation friction belong here because they shape use in practice.
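A manifest that keeps human-readable and machine-readable truth in parity can be sketched as a dictionary whose required surfaces mirror the list above. The key names and layout here are illustrative assumptions, not a normative schema.

```python
# Hypothetical machine-readable manifest sketch. Surface names mirror the
# disclosure surface above; key names and layout are illustrative assumptions.
REQUIRED_SURFACES = {
    "claims", "architecture", "interfaces", "infrastructure",
    "maintainers", "governance", "economics", "incidents",
}

manifest = {
    "release_id": "v2.1.0",
    "surfaces": {
        "claims": {"invariants": ["stated guarantees go here"]},
        "architecture": {"settlement": "settlement model summary"},
        "interfaces": {"json_rpc": "endpoint spec", "abi": "abi reference"},
        "infrastructure": {"node_requirements": "self-hosting posture"},
        "maintainers": {"release_signers": ["signer identities"]},
        "governance": {"timelock_seconds": 172800},
        "economics": {"fee_model": "fee and pricing disclosure"},
        "incidents": {"ledger": []},
    },
}

def missing_surfaces(m: dict) -> set[str]:
    """Return required surfaces absent from a manifest; missing canon is a finding."""
    return REQUIRED_SURFACES - set(m.get("surfaces", {}))
```

An incomplete manifest then surfaces its own gaps mechanically, which is the parity property the standard asks for: the machine layer reports the same omissions a human reviewer would find.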

Review flow

The review process

The review model freezes scope, checks claims against evidence, and issues a release-bound outcome.

Bind the review to a concrete release_id, network scope, control regime, effective configuration, and canonical artifact roots. Without that freeze, the review does not proceed.

Enumerate the full artifact set: overview, normative specification, technical explainers, control registry, dependency register, governance disclosure, economics, operations, incident ledger, and manifests. Missing canon is itself a finding.

Describe the system through enduring primitives and identify the components whose compromise or opacity could materially affect funds, control, finality, governance, or availability.

Convert trust, safety, compatibility, performance, decentralization, and economic claims into scoped claim records. Toulmin's (1958) argument model provides the decomposition structure: each claim is separated into its data (what evidence is offered), warrant (what justificatory principle connects the evidence to the claim), and backing (what grounds support the warrant's authority). Anything left as ambient rhetoric (claims that lack identifiable data or warrants) is downgraded or treated as unsupported.
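The Toulmin decomposition above can be sketched as a record with a status rule. The field names and the specific downgrade encoding are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a scoped claim record along Toulmin's model. Field names and the
# status rule's encoding are illustrative assumptions, not the standard itself.
@dataclass
class ClaimRecord:
    claim: str               # the asserted property
    data: Optional[str]      # what evidence is offered
    warrant: Optional[str]   # principle connecting evidence to claim
    backing: Optional[str]   # grounds supporting the warrant's authority

    def status(self) -> str:
        # Ambient rhetoric: no identifiable data or warrant -> unsupported.
        if self.data is None or self.warrant is None:
            return "unsupported"
        # Data and warrant but no backing -> downgraded, not discarded.
        if self.backing is None:
            return "downgraded"
        return "supported"

slogan = ClaimRecord("fully decentralized", data=None, warrant=None, backing=None)
# slogan.status() -> "unsupported"
```

The useful property is that status is derived from structure, so a marketing claim cannot reach "supported" without naming its evidence and the principle that connects the evidence to the claim.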

Map each material claim to direct state, source/release, governance, incident, audit, formal methods, operational, or policy/process evidence. Meyer's (1992) design by contract provides the verification model: each claim specifies conditions under which it can be checked, and the evidence trace demonstrates whether those conditions are satisfied. Evidence outside the assessed release scope is stale by default.
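The default staleness rule stated above is simple enough to sketch directly: evidence is admissible only when its scope includes the assessed release. The record shapes are illustrative assumptions.

```python
# Minimal sketch of the default staleness rule: evidence outside the assessed
# release scope is stale by default. Record shapes are illustrative assumptions.
def is_stale(evidence_scope: set[str], assessed_release: str) -> bool:
    """Evidence is stale unless its scope covers the assessed release."""
    return assessed_release not in evidence_scope

audit = {"kind": "audit", "scope": {"v2.0.0", "v2.1.0"}}
old_review = {"kind": "review", "scope": {"v1.4.0"}}

assert not is_stale(audit["scope"], "v2.1.0")   # in scope: admissible
assert is_stale(old_review["scope"], "v2.1.0")  # out of scope: stale by default
```

Carry-forward, described two steps below, is then the explicit act of extending an evidence scope after a public analysis of what changed, rather than silently assuming old evidence still applies.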

Look for privilege gaps, dependency silence, economic opacity, incident absence, registry drift, and conflicts across canonical artifacts. Winner's (1980) insight that artifacts embody political choices extends to disclosure artifacts: what a system omits from its disclosure is as much a design decision as what it includes, and those omissions shape the epistemic environment for every external reasoner. Under contradiction, the higher-risk interpretation governs.

Determine whether prior audits, tests, or reviews still apply. Carry-forward requires a public analysis of what changed and why older evidence remains relevant.

Produce the finding set, conformance outcome, limitations register, and claim status map. The result attaches to the assessed release and expires after material change.

Assessor rules

Use public canonical artifacts and admissible evidence for every material finding. Popper's (1963) falsification criterion applies: assessor conclusions must be derived from evidence that could, in principle, refute the claim under examination.

Do not treat silence as absence. Negative disclosure must be explicit. Bowker and Star (1999) demonstrate that classification systems encode choices through what they omit as much as through what they include; the same applies to disclosure systems.

Do not upgrade confidence because of brand prestige, certification theatre, TVL, or ecosystem size. Tversky and Kahneman's (1974) heuristics research documents how authority signals systematically bias judgment, and DiMaggio and Powell's (1983) institutional isomorphism explains why prestige signals propagate across organizations even when they lack evidential support.

Where evidence conflicts, preserve the higher-risk interpretation until the contradiction is resolved. This precautionary principle operationalizes Walton's (1996) argumentation theory: in the presence of conflicting evidence, the burden of proof rests with the party asserting the lower-risk interpretation.

If safe use, serious integration, or competent operation depends on gated knowledge, record a hard failure. Ostrom's (1990) monitoring principle requires that participants can access the information needed to evaluate resource conditions; when that information is privately held, the governance commons collapses.
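Two of the assessor rules above are mechanical enough to sketch: contradiction handling (the higher-risk interpretation governs) and the gated-knowledge hard failure. The risk ordering and record shapes are illustrative assumptions, not normative definitions.

```python
# Illustrative sketch of two assessor rules. The risk ordering and the record
# shapes are assumptions for the example, not part of the standard.
RISK_ORDER = ["low", "medium", "high", "critical"]

def governing_interpretation(interpretations: list[dict]) -> dict:
    """Where evidence conflicts, the higher-risk interpretation governs
    until the contradiction is resolved."""
    return max(interpretations, key=lambda i: RISK_ORDER.index(i["risk"]))

def hard_failure(release: dict) -> bool:
    """Safe use, serious integration, or competent operation depending on
    gated knowledge is a hard failure."""
    return any(release["gated_knowledge"].get(need, False)
               for need in ("safe_use", "integration", "operation"))

conflict = [{"claim": "pause power removed", "risk": "low"},
            {"claim": "pause power retained", "risk": "high"}]
# governing_interpretation(conflict)["claim"] -> "pause power retained"
```

Encoding the rules this way makes the burden of proof explicit: the optimistic reading only takes over once the contradiction is actually resolved, never by default.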

Review outcomes

Non-conformant

Missing hard-gate disclosure, unresolved contradiction, or material reliance on gated knowledge. Braithwaite's (2002) responsive regulation theory informs the treatment: non-conformance is the baseline state from which the standard provides a pathway toward compliance, rather than a punitive designation. The hard failures are bright lines that any system can evaluate against its own disclosure before seeking external assessment.

Core conformant

Release-bound disclosure exists and is canonical, but assurance strength may still be incomplete. Timmermans and Epstein (2010) observe that graduated conformance levels allow standards to accommodate organizations at different stages of capability maturity, reducing the all-or-nothing barrier that prevents adoption of more demanding standards.

Assurance conformant

Claims, evidence, limitations, and freshness rules are working at release scope. This level operationalizes Merton's (1942) organized skepticism: the system has subjected its own claims to the structured scrutiny the assurance layer requires and has produced evidence that withstands external examination.

Reliance-ready

The release has enough current evidence, control transparency, dependency clarity, and incident memory to support formal external review before a stricter reliance profile is published. Grossman and Hart's (1980) theory of disclosure equilibria predicts that systems achieving this level gain a competitive advantage in environments where counterparties, regulators, and integrators select partners based on disclosure quality.

Roadmap

From guide to review instrument

The next layers turn the guide into a public review instrument.

Define the object of assessment, materiality, primitives, anti-gatekeeping doctrine, and burden of proof. North's (1990) institutional economics provides the structural model: constitutions establish the rules that constrain all downstream institutional action, and the constitutional layer's definitions determine the scope and character of everything the standard can require.

State the minimum release-bound disclosure obligations that public systems must satisfy. Black's (2002) responsive regulation theory informs the design: the normative core establishes a baseline that enables graduated enforcement, where the severity of regulatory response can be calibrated to the observed degree of non-compliance.

Turn claims into scoped records, evidence bundles, limitations registers, contradiction rules, and freshness logic. Toulmin's (1958) argument model provides the structural template: every claim requires data (the evidence), a warrant (the justificatory principle connecting evidence to claim), and backing (the grounds supporting the warrant's authority). The assurance layer systematizes this structure for technical systems.

Publish structured manifests so agents, auditors, and allocators can consume the same material facts that humans read. Star and Griesemer's (1989) boundary object concept explains the design intent: the manifest functions as a boundary object that maintains structural coherence while enabling different consumer communities (agents, regulators, developers, analysts) to extract the renderings they need.

The assessor methodology specifies a public review workflow, contradiction handling, and evidence-weighting principles. The fuller assessor manual, calibrated evidence-weighting model, appeals mechanism, and assessor independence criteria are active workstreams. Longino's (1990) social epistemology grounds the methodology's design: knowledge claims withstand collective scrutiny through public forums, uptake of criticism, shared standards, and tempered equality of intellectual authority among participants.

Bridges, rollups, execution systems, asset systems, and node-heavy stacks need tighter specialized requirements. Braithwaite's (2002) responsive regulation pyramid informs the profiling approach: domain-specific risks warrant domain-specific obligations, calibrated to the severity and likelihood of harm in each domain.

Active workstreams

A public review method, contradiction handling, evidence weighting, appeals, revocation, and freshness logic. Longino's (1990) social epistemology informs the design: credible assessment requires public forums, uptake of criticism, shared standards, and tempered equality of intellectual authority.

Bridge, rollup, execution-system, asset, and node-heavy profiles with stricter domain-specific obligations. Braithwaite's (2002) responsive regulation pyramid provides the calibration model: domain-specific risks warrant domain-specific requirements, with enforcement intensity proportional to observed harm.

JSON Schema and reference manifests for the machine layer so the standard becomes ingestible by tooling and agents. Lamport's (2002) formal specification work establishes the standard for what machine-readable precision looks like: every property expressed in a form that can be mechanically checked against an execution trace.
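A fragment of what such a manifest schema might look like is sketched below. The property names are illustrative assumptions, not the published schema; the checker is deliberately tiny (required keys only), where real tooling would validate the full schema with a JSON Schema library.

```python
import json

# Hypothetical fragment of a release-manifest JSON Schema. Property names are
# illustrative assumptions, not the published schema.
RELEASE_MANIFEST_SCHEMA = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "required": ["release_id", "effective_date", "networks", "control_regime"],
    "properties": {
        "release_id": {"type": "string"},
        "effective_date": {"type": "string", "format": "date"},
        "networks": {"type": "array", "items": {"type": "string"}, "minItems": 1},
        "control_regime": {"type": "string"},
    },
}

def check_required(instance: dict, schema: dict) -> list[str]:
    """Tiny required-keys check; a full validator would enforce types,
    formats, and array constraints as well."""
    return [k for k in schema["required"] if k not in instance]

manifest = json.loads('{"release_id": "v2.1.0", "networks": ["mainnet"]}')
# check_required(manifest, RELEASE_MANIFEST_SCHEMA) -> ["effective_date", "control_regime"]
```

Once the schema is published, agents and auditors can run the same mechanical check against any disclosed manifest, which is what makes the standard ingestible by tooling.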

Model artifact families for control registries, dependency registers, incident ledgers, and assurance records. Star and Griesemer's (1989) boundary object theory informs the design: exemplars that maintain enough shared structure to coordinate across assessor, system operator, and consumer communities.

Open RFC process, conflicts policy, interpretation notes, and change management for the standard itself. Ostrom's (1990) design principles for long-enduring commons institutions supply the structural template: clear boundaries, proportional equivalence between benefits and costs, collective-choice arrangements, and monitoring of compliance.

The stricter profile that defines when a release is mature enough for external reliance and formal review. Jasanoff's (2004) co-production framework predicts that the reliance criteria will shape, and be shaped by, the institutional arrangements that form around the standard.

PDAS is a living standard. The clause set, evidence model, and manifest family will evolve as review methodology matures and domain profiles are published. This guide connects to the legibility thesis and the knowledge architecture framework.