Time-stamp: <2020-11-16 15h49 UTC>
Plausible authorship deniability
There are many protocols and systems which seek to solve the problem of some agents determining, perhaps with the certainty afforded by mathematical proof, that someone in particular said something. But upon reflection, this seems to be only half of the picture, and it occurred to me just recently to try to imagine what the other half might be. The title of this document reflects an attempt at summarising this.
The problem
In the real world (TM), after we share something with others we might wish to somehow withdraw that information. There are many reasons for this: sometimes we simply change our minds, sometimes better facts come to light, and sometimes we just don't want people to know anymore. Of course, in the age of smartphones and mass surveillance nothing can ever really be withdrawn, but there is a rather reasonable compromise on this.
If someone claims that someone else said something, but that person denies it and there's no /proof/ (in a stronger sense to be defined shortly) that they said it, then we have no reason to believe they did. That is to say, the reasonable compromise on withdrawing information is a system wherein refusing to acknowledge that something was said is indistinguishable from not having said it in the first place.
The abstract model
Of course all of this talk about people saying things was just motivation, and to really clarify the problem and identify solutions we should abstract away from tricky concepts like ``people'' and ``words''. Let us re-seat our problem in the context of agents which communicate and author information, together with a protocol for proving authorship of that information.
The first term we should clarify in this context is the notion of proof. The state of the art suggests we concede that a proof of authorship can only be statistical in nature. As long as one agent can prove to another that it is, with overwhelming probability, the author of a piece of information, we will consider the other agent to be completely convinced.
With this in mind we may then elaborate our `reasonable compromise' criterion from above into the following set of requirements:
For all agents A, B, C and information Z:
- agent A should be able to ask agent B to prove that information Z originated from B
- if Z did originate from B, and B chooses to prove it, then by following the protocol B should be able to convince A, with overwhelming probability, that Z originated from B
- if Z did not originate from B, then no matter what B does, by following the protocol A should be able to determine, with overwhelming probability, that Z did not originate from B
- if B proves to A that it authored Z, then even while retaining that proof A should not be able to claim authorship for itself: by following the protocol, C should be able to determine with overwhelming probability that A did not author Z. This should hold even if A plans ahead or engages in as many protocol sessions with B as it desires.
- it should be trivial for A and C to collude to contrive a transcript of a protocol session in which C appears to prove that it authored Z
It's that last part that really ties everything together: without the ability to trivially ``forge'' statements on the part of another agent, the plausible deniability mechanism does not hold. A concrete sketch of such a protocol follows below.
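To make these requirements concrete, here is a minimal sketch in Python of a Schnorr-style interactive proof of knowledge of a discrete logarithm, of the same flavour as the discrete-logarithm protocols mentioned in the next section. The toy parameters, the function names, and the binding of `authorship of Z' to knowledge of a private key are illustrative assumptions on my part, not a faithful rendition of any particular paper.

  # Toy parameters only; a real deployment would use a group of ~256-bit order.
  import secrets

  Q = 1019         # prime order of the subgroup
  P = 2 * Q + 1    # safe prime, 2039
  G = 4            # generator of the order-Q subgroup of Z_P^*

  def keygen():
      # B's long-term keys: private x, public y = G^x mod P. We assume for
      # illustration that y is publicly associated with the information Z.
      x = secrets.randbelow(Q - 1) + 1
      return x, pow(G, x, P)

  def prover_commit():
      # B picks a fresh secret nonce r and sends the commitment t = G^r.
      r = secrets.randbelow(Q - 1) + 1
      return r, pow(G, r, P)

  def verifier_challenge():
      # A replies with a uniformly random challenge c.
      return secrets.randbelow(Q)

  def prover_respond(x, r, c):
      # B answers with s = r + c*x mod Q; without r, s reveals nothing about x.
      return (r + c * x) % Q

  def verifier_check(y, t, c, s):
      # A accepts iff G^s == t * y^c (mod P), which holds exactly when
      # s = r + c*x for the x behind y.
      return pow(G, s, P) == (t * pow(y, c, P)) % P

  x, y = keygen()                # B's keys
  r, t = prover_commit()         # B -> A: commitment
  c = verifier_challenge()       # A -> B: challenge
  s = prover_respond(x, r, c)    # B -> A: response
  assert verifier_check(y, t, c, s)                 # an honest B convinces A
  assert not verifier_check(y, t, c, (s + 1) % Q)   # a tampered response fails

The two assertions at the end correspond to the second and third requirements above; the forgeability requirement is sketched a little further down.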
It turns out the general field of study is called Zero Knowledge Proofs
If this sounds like an interesting or useful thing, then you might be interested in Zero Knowledge Proofs (ZKPs). The Wikipedia article
is rather well written and illuminating. It also points to the paper ``An Improved Protocol for Demonstrating Possession of Discrete Logarithms and Some Generalizations'' by Chaum, Evertse, and van de Graaf, which demonstrates a pleasingly understandable approach to implementing the abstract model above. In summary, this problem is solvable -- but is it usefully solvable?
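Before asking that, the forgeability requirement deserves its own concrete illustration, because it is precisely the zero-knowledge property at work: anyone holding only the public key can fabricate a transcript distributed identically to a real session, simply by choosing the challenge before the commitment. A sketch, reusing the toy group from the earlier sketch (again an illustrative assumption):

  # Forging an accepting transcript for someone else's public key y, without
  # knowing the private key x. Same toy group as above; the negative modular
  # exponent requires Python 3.8+.
  import secrets

  Q = 1019
  P = 2 * Q + 1
  G = 4

  y = pow(G, 123, P)   # a stand-in for another agent's public key

  # Choose the challenge c and response s first, then solve for the
  # commitment: t = G^s * y^(-c) mod P, so that G^s == t * y^c (mod P)
  # holds by construction.
  c = secrets.randbelow(Q)
  s = secrets.randbelow(Q)
  t = (pow(G, s, P) * pow(y, -c, P)) % P

  assert pow(G, s, P) == (t * pow(y, c, P)) % P   # the forged transcript verifies

Such a transcript only ever convinces the agent who chose the challenge live; written down and passed along, it proves nothing.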
Just how plausible?
Of course in such a system we could refuse to prove authorship to people, and no-one would thereafter be able to tell with mathematical certainty whether we had in fact authored something in particular, but ... if we had originally proved authorship to many people and they all claimed, in their own words and with mathematical certainty, that we had authored this information ... well, unfortunately the persuasiveness of mass collusion is not modelled well in this system. That is to say, unless the air is rife with contradictory claims and unverifiable contradictions which propagate en masse, rescinding authorship of one piece of information in isolation may always be cast as `suspicious' -- and unfortunately that would appear to be the real standard of truth.
Closing questions
- Would it be possible to automate the authoring of contradictory information, only the correct version of which is verified? Replace every ``is'' with ``is not''?
- Would it instead be better to automate propagating contradictory or altered versions of other agents' information?
- Would either of these be enough in practice?
- Would you be interested in using such a system?