Trying Post-Quantum Signatures in a Litecoin Fork
The correctness work behind swapping a coin's signature scheme for a post-quantum one: conformance, determinism, sighash, and fuzzing.
This post covers the correctness work for QLTC, a Litecoin fork that validates post-quantum signatures from its own genesis block. Swapping the signature scheme is a few weeks of work; the harder part is proving the result is correct and deterministic, which is what we'll walk through here.
Background
A coin is a distributed agreement machine. Every node independently checks every transaction and must reach the same verdict as every other node. If two honest nodes disagree about one transaction, the network splits into two incompatible chains — a chain split (covered in the next section).
A short vocabulary pass, since the rest depends on it:
- Public-key signature — the math that proves the owner of a coin authorized a spend. Bitcoin and Litecoin use ECDSA, built on elliptic curves.
- The quantum problem — a large quantum computer running Shor's algorithm can recover an ECDSA private key from its public key, which makes today's signatures forgeable.
- Post-quantum signature — a scheme believed to resist both classical and quantum attackers. QLTC uses ML-DSA-65 (Dilithium), standardized by NIST as FIPS 204.
- liboqs — the library that implements ML-DSA. We vendor a pinned copy into the tree rather than depend on whatever is installed.
We don't reinvent the cryptography — that's where subtle mistakes are easy to make. The work is making vetted crypto behave deterministically inside a consensus system. We split it into tiers; this is the first, the correctness tier, and the rest of the project waits on it.
What a chain split is
A chain split is when nodes stop agreeing on one shared history and the network continues as two divergent chains. There are two kinds, and only one is a problem.
The harmless kind is temporary: two miners find a block at the same height at nearly the same time, the network briefly sees both, then converges on the heavier chain and drops the other. It resolves within a block or two on its own.
The kind we care about comes from a rule disagreement. If some nodes accept a block that others reject, each group keeps extending the chain it believes in, and the two never reconcile — each side treats the other's blocks as invalid indefinitely. The same coin can then be spent separately on both sides, and participants disagree about which transactions happened. This is what a consensus bug produces, and it can't be patched after the fact because both histories already exist.
A deliberate rule change is different again: a hard fork is a planned, coordinated upgrade. If everyone adopts it there's no lasting split; if a group declines, two chains exist on purpose — which is how Litecoin itself descends from Bitcoin's lineage. Changing QLTC's signature scheme later would be that planned kind. A determinism or conformance defect in the verifier would be the accidental kind, unplanned and unrecoverable, which is why the work below comes first.
Conformance testing
The first question is whether our ML-DSA is correct ML-DSA — not "does it sign and verify," but "does it produce the exact bytes the standard requires." If our verifier accepts something a strictly conforming verifier rejects, the two disagree and the chain splits.
The tool is a KAT (known-answer test): NIST publishes fixed inputs and the exact outputs a correct implementation must produce. Feed the inputs, hash the outputs, compare against a pinned value.
We run two gates. One checks the official vectors against the exact library our consensus code links:
```
contrib/qltc_pqc_kat.sh
ML-DSA-65 known-answer digest
  expected (pinned): 7cb96242...ecebf072
  computed (ours):   7cb96242...ecebf072
  -> PASS
```

The second pins the output of our own wrapper around that library, so the glue code we wrote is proven not to perturb a single byte. Two gates because they answer different questions: is the primitive correct, and did our plumbing change it.
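Neither script's internals appear above, but the shape is easy to sketch. Here is a hedged C++ version of the first gate against liboqs's C API; `LoadKatVectors()` and `Sha256Hex()` are hypothetical stand-ins for the vector parser and a SHA-256 helper, and the pinned digest stays elided as it is above:

```cpp
// Sketch of a KAT gate (not the real contrib/qltc_pqc_kat.sh): check every
// published vector against the exact liboqs we link, then pin a digest of
// the outcomes. LoadKatVectors() and Sha256Hex() are hypothetical helpers.
#include <oqs/oqs.h>
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

struct KatVector {
    std::vector<uint8_t> public_key, message, signature;
    bool expected_valid; // the vector sets include deliberate failures
};

std::vector<KatVector> LoadKatVectors();          // hypothetical: parse NIST files
std::string Sha256Hex(const std::string& bytes);  // hypothetical: any SHA-256

int main() {
    OQS_SIG* sig = OQS_SIG_new(OQS_SIG_alg_ml_dsa_65);
    assert(sig != nullptr);

    std::string transcript; // accumulate every verdict, then hash once
    for (const KatVector& v : LoadKatVectors()) {
        bool ok = OQS_SIG_verify(sig, v.message.data(), v.message.size(),
                                 v.signature.data(), v.signature.size(),
                                 v.public_key.data()) == OQS_SUCCESS;
        assert(ok == v.expected_valid); // any mismatch is a conformance failure
        transcript.push_back(ok ? '1' : '0');
    }
    OQS_SIG_free(sig);

    // The digest is pinned in the repo; a library bump that changes behavior
    // changes this value and fails the gate.
    assert(Sha256Hex(transcript) == "7cb96242...ecebf072"); // pinned (elided)
    return 0;
}
```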
Cross-implementation determinism
Crypto libraries ship more than one implementation of the same algorithm. liboqs has a portable C version and an AVX2 version (AVX2 is a set of fast vector instructions on modern Intel/AMD CPUs), and picks one at runtime based on the host.
For most software that's a harmless speed optimization. For consensus it's a risk: if the two paths ever differ by one bit, nodes on different hardware split the chain. So we prove they don't — a gate builds the library both ways, feeds identical inputs, and asserts byte-identical output:
```
contrib/qltc_pqc_diff.sh  # portable build vs AVX2 build, must match
```
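The build matrix lives in the script; the decisive check at the end is a plain byte comparison. A minimal sketch of that final step, assuming each build has dumped its raw outputs to a file (the file names are hypothetical):

```cpp
// Sketch of the compare step of a determinism gate: the same fixed inputs
// were run through the portable build and the AVX2 build, each dumping its
// raw output bytes; any single differing byte fails the gate.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

static std::vector<char> ReadAll(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    return {std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>()};
}

int main() {
    const auto portable = ReadAll("out_portable.bin"); // hypothetical name
    const auto avx2     = ReadAll("out_avx2.bin");     // hypothetical name
    if (!portable.empty() && portable == avx2) {
        std::puts("-> MATCH");
        return 0;
    }
    // Report the first divergence: this is the bit that would split a chain.
    size_t n = std::min(portable.size(), avx2.size());
    size_t i = 0;
    while (i < n && portable[i] == avx2[i]) ++i;
    std::fprintf(stderr, "MISMATCH at byte %zu\n", i);
    return 1;
}
```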
A startup self-test
The determinism gate runs in CI. We also want a guarantee on each operator's actual machine, so the node runs one fixed known-answer check at boot, before it touches the network:
```cpp
// Refuse to start if this binary on this CPU disagrees with the pinned answer.
if (!pqc::SelfTest(err)) {
    return InitError(err); // bad build / exotic CPU / miscompile
}
```

A node that can't agree with the network shouldn't join it. A loud refusal at boot is the behavior we want here.
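The body of `SelfTest` is one pinned known-answer check. A minimal sketch of what it could look like, assuming liboqs's C API; the embedded `kKat*` arrays are hypothetical placeholders for the pinned vector data:

```cpp
// Sketch of a boot-time self-test: one pinned known-answer check, run on the
// operator's actual CPU before the node joins the network. The embedded
// vector arrays are hypothetical; the real ones would be generated from the
// pinned KAT data.
#include <oqs/oqs.h>
#include <cstddef>
#include <cstdint>
#include <string>

namespace pqc {

extern const uint8_t kKatPubKey[];    // hypothetical embedded vector
extern const uint8_t kKatMessage[];
extern const uint8_t kKatSignature[];
extern const size_t  kKatMessageLen, kKatSignatureLen;

bool SelfTest(std::string& err) {
    OQS_SIG* sig = OQS_SIG_new(OQS_SIG_alg_ml_dsa_65);
    if (!sig) {
        err = "ML-DSA-65 unavailable in this build";
        return false;
    }
    // This binary, this CPU, the runtime-selected code path: the known-good
    // signature must verify here exactly as it did in CI.
    bool ok = OQS_SIG_verify(sig, kKatMessage, kKatMessageLen,
                             kKatSignature, kKatSignatureLen,
                             kKatPubKey) == OQS_SUCCESS;
    OQS_SIG_free(sig);
    if (!ok) err = "PQC self-test failed: refusing to start";
    return ok;
}

} // namespace pqc
```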
What the signature actually signs
A signature doesn't sign "the transaction." It signs a sighash — a hash computed in a precisely specified way over a specific subset of the transaction. If the signer and verifier compute it even slightly differently, every signature looks invalid even though the cryptography is fine. The defect is in what the math was applied to, not the math.
The proof-of-concept used the older SegWit sighash (BIP-143). We migrated it to BIP-341, the message construction Taproot uses: length-tagged and harder to get subtly wrong. The important design choice is that the signer and verifier call one shared function, so they can't drift apart:
```cpp
// One definition, used by both the wallet (signing) and consensus (verifying).
uint256 PQCSignatureHash(const CTransaction& tx, unsigned int nIn,
                         const std::vector<CTxOut>& spent_outputs);
```

This migration is also where the correctness work paid for itself. It surfaced a real consensus bug: a precomputation step prepared the BIP-341 data only for the old address type, not ours. In tests it tripped an assertion; in a running node validating a real post-quantum spend it would have crashed the node. We fixed it, including the underlying asymmetry — the signer computes the hash before the signature exists, so it can't rely on the detection logic the verifier can. The single-shared-function rule is what made the fix provably symmetric.
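About "length-tagged": BIP-341 builds its message with the BIP-340 tagged-hash construction. A minimal sketch of that primitive, using the CSHA256 hasher already in the Bitcoin/Litecoin tree — this is the building block BIP-341 rests on, not QLTC's exact sighash code:

```cpp
// Sketch of the BIP-340/341 tagged-hash construction (the building block,
// not QLTC's sighash). CSHA256 is the hasher that ships in the tree.
#include <crypto/sha256.h>
#include <string>
#include <vector>

// SHA256(SHA256(tag) || SHA256(tag) || msg): the tag makes a hash from one
// context unusable in another, which is part of what "harder to get subtly
// wrong" buys.
std::vector<unsigned char> TaggedHash(const std::string& tag,
                                      const std::vector<unsigned char>& msg) {
    unsigned char tag_hash[CSHA256::OUTPUT_SIZE];
    CSHA256()
        .Write(reinterpret_cast<const unsigned char*>(tag.data()), tag.size())
        .Finalize(tag_hash);

    std::vector<unsigned char> out(CSHA256::OUTPUT_SIZE);
    CSHA256()
        .Write(tag_hash, sizeof(tag_hash))
        .Write(tag_hash, sizeof(tag_hash))
        .Write(msg.data(), msg.size())
        .Finalize(out.data());
    return out;
}
```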
Fuzzing the verifier
Everything above tests the expected path and a few deliberate failures. Attackers send malformed input by the megabyte. Fuzzing is the practice of firing large volumes of random and semi-random input at a function to find a crash, a memory bug, or a forged signature that verifies.
We added two layers: a target for the standard fuzzing engines, used for long unattended runs, and a deterministic, seeded stress suite that runs in normal CI over thousands of randomized keys, signatures, witness layouts, and bit-flipped transactions. The invariants are blunt:
```cpp
// Arbitrary attacker input:
assert(!verify(random_pubkey, random_msg, random_sig)); // a forgery must not pass

// A genuine signature still verifies; a corrupted tx is rejected cleanly and
// the process stays up.
```

The fuzzers run clean across the attacker-controlled surface.
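For the engine-facing layer, a sketch of what a libFuzzer-style target over this surface can look like; the split of the input buffer is an arbitrary choice here, and only the entry-point signature and the FIPS 204 sizes are fixed:

```cpp
// Sketch of a libFuzzer-style target over the attacker-controlled surface
// (the real target's harness plumbing is omitted). ML-DSA-65 sizes are fixed
// by FIPS 204; how the fuzz buffer is carved up is arbitrary.
#include <oqs/oqs.h>
#include <cassert>
#include <cstddef>
#include <cstdint>

static constexpr size_t PK_LEN  = 1952; // ML-DSA-65 public key
static constexpr size_t SIG_LEN = 3309; // ML-DSA-65 signature

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    if (size < PK_LEN + SIG_LEN + 1) return 0; // need at least a 1-byte message

    static OQS_SIG* sig = OQS_SIG_new(OQS_SIG_alg_ml_dsa_65);
    if (!sig) return 0;

    const uint8_t* pk      = data;
    const uint8_t* sgn     = data + PK_LEN;
    const uint8_t* msg     = data + PK_LEN + SIG_LEN;
    const size_t   msg_len = size - PK_LEN - SIG_LEN;

    // Invariants: never crash, never accept. A return of OQS_SUCCESS here
    // would be a forgery, which is a finding, not noise.
    OQS_STATUS st = OQS_SIG_verify(sig, msg, msg_len, sgn, SIG_LEN, pk);
    assert(st != OQS_SUCCESS);
    return 0;
}
```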
Threat model and review scope
The last item in this tier is a document, not code: a threat model that names every place the new cryptography touches consensus, ranks failure modes by blast radius (chain split first, since it's the only one that can take the whole network down), and lists open questions for an external reviewer. It explicitly records the project's weak spots, with the bug above as an illustrative example of the risk class. A security review is only as good as the map it's handed.
What I'd recommend
If we're adding cryptography to a consensus system, I'd gate two things separately: the primitive (against published conformance vectors) and our own wrapper (against a pinned output of our exact code). They fail for different reasons and conflating them hides bugs. I'd also add a boot-time self-test — it converts a silent, network-wide divergence into a local, obvious refusal to start, which is a much cheaper failure to deal with.
Things to keep in mind
This tier proves correctness; it doesn't make the coin usable. The determinism gate depends on building liboqs two ways, which adds CI time. The self-test adds a few milliseconds to startup. And conformance is only ever "conformant to the version we pinned" — a library bump has to be treated as a consensus change and re-gated, not a routine dependency update.
Remaining work after the correctness pass
With the correctness tier green, the question "is this possible" is answered, and the rest is engineering and product work rather than open research:
- Anti-abuse — post-quantum signatures are roughly 70x larger than ECDSA, so we add a signature cache and measure worst-case block-validation cost.
- Relay and chain identity — making the new output type relay-standard, and giving the coin its own genesis, network identity, and address format.
- Wallet integration — so the wallet counts post-quantum coins in its balance and spends them through an ordinary send.
- Crypto-agility — a second scheme (Falcon, or a classical-plus-post-quantum hybrid) drops in behind the same interface without touching consensus.
- Reproducible builds — so anyone can compile the source and verify the released binary.
None of those can split the chain. The correctness tier was the part that could, which is why it came first.