Data Availability
What is Data Availability?
Data Availability ensures that data referenced in Palliora actually exists, remains accessible, and has not been tampered with.
In Palliora, contracts, computations, and verification all depend on data being reliably retrievable. Data Availability guarantees that once data is committed to the network, it cannot be silently removed, censored, or altered.
It is a foundational layer that makes off-chain computation enforceable.
The problem Data Availability solves
In decentralised systems, it is not enough to agree on data. Participants must also be confident that the data can be retrieved later by anyone authorised to access it.
As systems scale, this becomes difficult.
Large volumes of data are expensive to store and distribute. Requiring every node to download and verify all data does not scale. At the same time, relying on a small number of storage providers introduces censorship risks and trust assumptions.
Without strong Data Availability guarantees:
Contracts become fragile
Computation can stall
Verification becomes impossible
Malicious actors can withhold data without detection
Data Availability addresses scalability, cost, and censorship resistance simultaneously.
How Data Availability works
Palliora uses a dedicated Data Availability layer designed to scale while remaining verifiable.
When data is submitted:
Encryption (if required): data is encrypted when confidentiality is needed.
Data sharding: the data is split into smaller chunks, known as shards.
Erasure coding: Reed–Solomon erasure coding is applied to introduce redundancy, so the original data can be reconstructed even if some shards are missing.
KZG commitments: each shard is committed to using a KZG polynomial commitment scheme. These commitments are small, efficient, and immutable once recorded.
Rather than storing all raw data directly on-chain, the network stores cryptographic commitments to the data.
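The pipeline above can be sketched in a few lines. This is a minimal illustration, not Palliora's implementation: a single XOR parity shard stands in for Reed–Solomon coding (which tolerates many simultaneous erasures), and SHA-256 hashes stand in for KZG commitments (which additionally support succinct opening proofs).

```python
import hashlib
from functools import reduce

def shard(data: bytes, n: int) -> list[bytes]:
    """Split data into n equal-size shards, zero-padding the tail."""
    size = -(-len(data) // n)  # ceiling division
    padded = data.ljust(size * n, b"\x00")
    return [padded[i * size:(i + 1) * size] for i in range(n)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(shards: list[bytes]) -> list[bytes]:
    """Toy erasure code: one XOR parity shard lets any single
    missing shard be rebuilt from the rest."""
    return shards + [reduce(xor, shards)]

def commit(shards: list[bytes]) -> list[bytes]:
    """Per-shard commitments (SHA-256 as a stand-in for KZG)."""
    return [hashlib.sha256(s).digest() for s in shards]

data = b"example payload committed to the availability layer"
shards_with_parity = add_parity(shard(data, 4))
commitments = commit(shards_with_parity)

# Lose one shard, then reconstruct it from the parity and the rest:
# XORing the four surviving shards yields the missing one.
lost = 2
survivors = [s for i, s in enumerate(shards_with_parity) if i != lost]
recovered = reduce(xor, survivors)
assert recovered == shards_with_parity[lost]
assert hashlib.sha256(recovered).digest() == commitments[lost]
```

Only the list of commitments needs to go on-chain; the shards themselves live in the Data Availability layer.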
Sampling and verification
Nodes do not need to download the full dataset.
Instead, they randomly sample a subset of commitments and request the corresponding shards. If the sampled shards are available and match their commitments, this provides strong probabilistic evidence that the entire dataset is available.
If a node fails to provide the requested data, that failure can be proven: a fraud proof can be submitted, and the responsible node may be penalised.
This creates economic incentives for honest storage.
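The sampling loop can be sketched as follows, under the same stand-in assumptions as before (SHA-256 hashes in place of KZG commitments; the `store` dictionary and `verify_sample` helper are illustrative names, not Palliora APIs):

```python
import hashlib
import random

def verify_sample(store: dict[int, bytes], commitments: list[bytes],
                  sample_size: int, rng: random.Random):
    """Randomly sample shard indices, request each shard, and check it
    against its commitment. A missing or mismatched shard is returned
    as grounds for a fraud proof."""
    for i in rng.sample(range(len(commitments)), sample_size):
        s = store.get(i)  # None models a withheld shard
        if s is None or hashlib.sha256(s).digest() != commitments[i]:
            return False, i
    return True, None

shards = [bytes([i % 256]) * 32 for i in range(1000)]
commitments = [hashlib.sha256(s).digest() for s in shards]

honest = dict(enumerate(shards))
censoring = {i: s for i, s in honest.items() if i % 2 == 0}  # withholds half

rng = random.Random(42)
assert verify_sample(honest, commitments, 100, rng) == (True, None)
ok, bad_index = verify_sample(censoring, commitments, 100, rng)
assert not ok  # a censor withholding half the shards is caught
```

Each failed check is independently verifiable by anyone holding the commitments, which is what makes the penalty enforceable.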
Why KZG commitments are used
KZG commitments provide several important properties:
Succinct proofs — proof sizes remain small
Non-interactive verification — no multi-round communication required
Strong cryptographic binding — committed data cannot be altered
This allows Palliora to scale without sacrificing integrity.
Trust and guarantees
This design provides clear guarantees:
Data cannot be silently censored or removed
Data cannot be altered without detection
Nodes can challenge availability at any time
The system scales without requiring full data downloads
Importantly, Data Availability does not require trust in a single storage provider. It relies on cryptography and economic incentives rather than reputation or permission.
Confidentiality remains separate. Data may be encrypted before entering the Data Availability layer, so availability does not imply visibility.
What Data Availability is not
Data Availability does not decide who is allowed to access data or when.
It does not execute computation. It does not verify computation results. It does not enforce contract logic.
Its responsibility is deliberately narrow: to ensure that committed data exists, remains intact, and can be retrieved when required.
How Data Availability fits into Palliora
Data Availability underpins Palliora’s execution model.
Computable contracts reference data without embedding it directly. Guardians rely on the Data Availability layer to retrieve inputs and store outputs. Verifiers use it to check integrity. Calculators depend on it to access data reliably.
Without Data Availability, Palliora contracts would be promises. With it, they become enforceable.
A simple example
Imagine a dataset of 100 GB.
Instead of storing it as a single object, Palliora splits it into 1,000 smaller shards, adds redundancy through erasure coding, and commits to each shard using compact KZG commitments.
Nodes randomly sample a subset of the shards, say 100, and request them. If the sampled shards are available and valid, it is overwhelmingly likely that the entire 100 GB dataset is available, without requiring anyone to download all of it.
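The probability behind "overwhelmingly likely" is easy to compute. Assuming, purely for illustration, an erasure code that can rebuild the data from any 50% of the shards, a censor must withhold at least half of them to make the data unrecoverable, so every sample has at most a 50% chance of missing a withheld shard:

```python
# Illustrative parameters: 1,000 shards, rate-1/2 erasure coding,
# 100 independent random samples.
samples = 100
withheld_fraction = 0.5  # minimum a censor must withhold to block recovery
p_undetected = (1 - withheld_fraction) ** samples
print(f"chance censorship goes unnoticed: {p_undetected:.1e}")  # ~7.9e-31
```

Sampling without replacement, as nodes actually do, only drives this bound lower.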
This is how Palliora achieves scalability with strong guarantees.