A method to mitigate the damage of insider leaks

Wikileaks is a famed (some would say notorious) website that makes a practice of disclosing documents alleged to be from governments and corporations. They are presently in the news, of course, for the release of about 250,000 classified documents, most concerning the internal workings of the United States Department of State.

Today I want to discuss a method that could be used to deflect attacks like those of Wikileaks, if not defeat them.

No, it is not a better security system, a better firewall, a better method of watching for intrusions, or even a better method for guarding against insider attacks. I wish I had really good new ideas on how to enhance the security of systems; we need more of them. But they will never stop Wikileaks (WL) from getting classified material. There will always be ways around the best security systems; there will always be “holes” in any system; and there will always be the insider threat.

A disclaimer. I have mixed feelings about today’s discussion. I am well aware that Wikileaks has won many awards for their journalism. Yet leaks of documents can damage the security of legitimate governments, who are trying to make their countries safe. However, leaks can also expose lies and mistakes that even legitimate governments make, and are almost always unwilling to admit.

I do believe that privacy and the protection of government information are of great importance. Thus, I decided to risk discussing this timely issue:

How do we stop leaks from causing damage?

I hope those on both sides of this issue can discuss the ideas here for their own merit.

I have consulted with a number of colleagues about this post, especially because of the nature of the problem. I want to thank Rich DeMillo for many interesting additions, and Patrick Traynor with whom I had a long conversation about these ideas weeks ago. As always I thank Subrahmanyam Kalyanasundaram and Ken Regan. Any errors, any mistakes, are as always mine.

How To Stop Leaks?

I have no idea. I think leaks are like gravity: it is impossible to turn them off. No matter how terrific your security is, there will continue to be leaks of all kinds. What I do think is that there is a mitigation strategy that can make leaks less damaging.

How to Mitigate Leaks?

Suppose that Alice runs an agency that handles very sensitive information. The thousands of people in her agency have access to millions of documents that would be potentially interesting to WL. Alice does nothing special until a leak occurs—although see a later section for a more “on-line” approach.

Suppose that WL gets documents ${D = D_1,\dots,D_m}$ from some source inside Alice’s agency. They publish them on their web site, and then Alice is faced with a major problem.

Today she can do nothing to stop the leak. She can try to find the insider who made the leak, and use the legal system to deal with them. But that does nothing to mitigate the damage that is already done. There is an American idiom that says:

close the barn door after the horse has bolted.

This means: “Trying to take action when it is too late.” Today this is where Alice is—the horse is gone—the documents have been leaked. Closing the source of the leak does not help get the horse back.

However, she can do something to mitigate the leak. Here is the strategy: she runs a special program over the documents ${D}$ and creates new ones ${F = F_1,\dots,F_n}$. These new documents are similar to the ones leaked, but they differ in many ways. Alice then “leaks” her fake documents ${F}$.

What is the point of this? Now the media and the public are confused. Which is genuine, ${D}$ or ${F}$? If ${F}$ is cleverly constructed, it should contain some “bad” information, but will differ from ${D}$ in important ways. For example, if ${D}$ has a passage that says:

Let’s pay X ten million dollars to do Y.

The documents ${F}$ could have a passage:

Let’s not pay X ten million dollars to do Y.

If Alice is smart she may even make some of the passages in ${F}$ worse than those in ${D}$. Thus she could have a passage:

Let’s pay X fifty million dollars to do Y.

The critical point is that ${D}$ and ${F}$ will look alike, but will differ in many places.

The existence of ${F}$ will increase everyone’s uncertainty. What are the correct facts, what is true, and what is not? This increase in uncertainty will muffle the effectiveness of the released documents ${D}$. Consider the dilemma facing a media outlet: would they feel comfortable in stating something if there is great uncertainty? Not clear.

Alice can do more to increase the uncertainty. She could, and probably should, release multiple versions of ${F}$. These versions would flood the media system. It could take a long time for anyone to unravel, if ever, which are “real” and which are not. She can even release information that is more damaging than the real documents. She can denounce all of them as fake, or claim only some of them are.
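As a toy illustration of the strategy, here is a sketch of how such variant documents might be generated mechanically. Everything here is hypothetical: the two rewrite rules are invented for this example, and a serious system would need far more linguistic sophistication to produce convincing fakes.

```python
import itertools
import re

# Toy rewrite rules, invented for this sketch: flip a negation, or swap
# a spelled-out dollar amount for a different one.

def negation_variants(text):
    """Yield a version of the text with a negation inserted or removed."""
    if " not " in text:
        yield text.replace(" not ", " ", 1)
    else:
        yield re.sub(r"\b(pay|send|approve)\b", r"not \1", text, count=1)

def amount_variants(text):
    """Yield versions with a different spelled-out amount."""
    amounts = ("ten", "twenty", "fifty")
    for m in re.finditer(r"\b(%s)\b" % "|".join(amounts), text):
        for alt in amounts:
            if alt != m.group(1):
                yield text[:m.start()] + alt + text[m.end():]

def fake_documents(d, limit=4):
    """Produce up to `limit` plausible-looking variants of document d."""
    variants = itertools.chain(negation_variants(d), amount_variants(d))
    return list(itertools.islice(variants, limit))

d = "Let's pay X ten million dollars to do Y."
for f in fake_documents(d):
    print(f)
```

Run on the sample passage, this produces the negated version and the altered dollar amounts, matching the examples above.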

Hiding Policies, Not Options

Ken Regan noted that the above simple example still leaves it certain that option Y was “on the table”—and that might be the shock of the leak regardless of whether and how much bribery was involved. A way to further mitigate this shock is to release publications that mention Y alongside other options Z,W,…, and then obfuscate with documents that rotate which one was mentioned. Moreover, one can already have published non-secret documents that chatter about all these options in an abstract manner—proceedings of public policy conferences, for example. This gets everything “out there” so that no single mention of Y raises the fear of the unfamiliar. Thus Alice’s task becomes the easier one of not having to hide Y itself, but only that Y was the option favored by her agency.

How To Do This?

My colleagues have made several interesting suggestions.

${\bullet }$ Rich has several clever ideas on how to create the alternative documents. One is the use of automatic language translators: take a document and translate it to another language and back. Since translators are not perfect, this will change the document. I used this method previously here. There are theory ideas based on methods to protect database information that could perhaps be used—especially for numerical data.

${\bullet }$ Rich and Patrick both thought that a more “on-line” system approach might be better. The advantage of this is that the alternative documents could be created even by the authors of the originals. Or they could be created automatically, but would be available for immediate release when needed. Patrick even suggested, in some situations, there could be a stream of constant “leaks” that would be more proactive in protecting Alice’s agency.
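The database-protection idea mentioned above, applied to numerical data, can be sketched with a standard trick from the database-privacy literature: add calibrated Laplace noise to each figure. This is a minimal sketch under my own assumptions; the figures and the noise scale are invented for illustration.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample from a Laplace(0, scale) distribution via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def perturb_figures(figures, scale=2.0, seed=0):
    """Return a noisy copy of a list of numeric figures (say, dollar
    amounts in millions); `scale` controls how far fakes stray from
    the truth."""
    rng = random.Random(seed)
    return [round(x + laplace_noise(scale, rng), 1) for x in figures]

true_figures = [10.0, 50.0, 7.5]  # hypothetical amounts from D
print(perturb_figures(true_figures))
```

A fixed seed makes one fake version reproducible; varying the seed yields the flood of distinct versions discussed in the post.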

Open Problems

Is there a way to make this work? Can we implement an automatic system that would make the “fake” documents? These would have to seem to be real, or they would not be of any use.

I would like to end with why I was thinking about this question in the first place. I believe that a fundamental problem with many social networks, for example Facebook, is that people can post information that they may later regret sharing. The folk “theorem” is once on the web there is no way to erase the information—it is there forever. I thought that there might be a way to use the addition of information to solve that problem. What do you think?

December 2, 2010 7:58 pm

This debate is no different from “proprietary” vs. “copylefting”. And while “DRM” research can tilt the scale in “proprietary’s” favor, is that really the right thing to do?

Why is there really any need for governments to hide anything? If anything, we need greater transparency!

Let me now repeat the oft-repeated … in a street fight involving guns, I want the better gun, and let me hide the technology so that I can continue to have the better gun.

December 2, 2010 8:51 pm

As a mathematician, I loathe the idea of mathematics being twisted to evil purposes like this. As the above commenter said, we need MORE transparency, not less.

December 2, 2010 9:16 pm

I loathe the person who leaks such sensitive information.

December 3, 2010 1:39 am

I agree. States exist for social structure and protection. Some information, which it would be wonderful for all citizens to be privy to, must be hidden, because there is no way to make a government transparent to only its citizens and allies without also including its enemies.

Government transparency is impractical since enemies of said government would have access to that information, which in turn could be harmful to the government.

December 3, 2010 3:16 am

Could you please explain in clear terms and with enough details the following:

1. Why do you think this is “an evil purpose”?
2. Why do we need more transparency and not less? And who is precisely “we”?
3. Do you also reject Cryptography on the ground that it is “mathematics twisted to support evil purposes” (because it obviously harms transparency)?

December 3, 2010 9:45 pm

It is not clear that cryptography can *only* be used to harm transparency.

For example, Wikileaks has distributed the encrypted “insurance” file, presumably to be decrypted in an emergency. Thus, he cleverly uses cryptography to ensure transparency.

December 2, 2010 8:53 pm

This idea could be generalized to any information that someone doesn’t like — not just leaks of classified information.

The idea could have the unintended side-effect of harming the news media as people would start being skeptical about everything that is being reported.

December 2, 2010 8:56 pm

Uhhh … gosh … to see the disadvantages of this “dilute-the-truth” strategy, in any functioning democracy, one need only reflect on the lessons learned from the Vietnam era, as summarized (for example) in Neil Sheehan’s A Bright Shining Lie: John Paul Vann and America in Vietnam:

Gen. Brute Krulak to Gen Lew Walt: “You cannot win militarily. You have to win totally, or you are not winning at all.” (p. 637)

Also relevant is Gen. H. R. McMaster’s Dereliction of Duty: Lyndon Johnson, Robert McNamara, the Joint Chiefs of Staff, and the Lies that Led to Vietnam, quoting Hans Morgenthau:

“To say that the most momentous issues a nation must face cannot be openly and critically discussed is really tantamount to saying that democratic debate and decision do not apply to the questions of life and death. … Not only is this position at odds with the principles of democracy, but it removes a very important corrective for governmental misjudgement.” (p. 243)

These lessons from the Vietnam era were pretty thoroughly assimilated (both of the above texts feature prominently in the USMC Commandant’s recommended reading list), and for this reason the biggest news of the Wikileaks is that … there ain’t much news in the Wikileaks.

Now, the coming corporate Wikileaks promise to be much juicier! 🙂

December 2, 2010 9:00 pm

Rivest has a communication system like this: chaffing and winnowing. You send many variations of your message, “attack at dawn”, “attack at noon”. Only the correct one has a valid signature, and only the intended recipient can check.

Another related crypto idea is to have a ciphertext that decrypts to the correct message with one key and to an innocuous message with a “duress” key.

Probably you could give out different keys to staffers and identify the leaker as well (see Clancy’s canary trap as well).
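Rivest’s chaffing-and-winnowing scheme mentioned in this comment is easy to sketch with a MAC: every packet travels in the clear, but only the “wheat” carries a tag that verifies under the shared key. This is a minimal sketch, not Rivest’s full bit-level construction; in particular, the zeroed chaff tags here stand in for random-looking bogus tags.

```python
import hmac
import hashlib

KEY = b"shared-secret"  # known only to sender and intended recipient

def mac(msg, key=KEY):
    """Authentication tag for a message under the shared key."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def add_chaff(wheat, chaff_msgs):
    """Send the real message with a valid tag, alongside chaff whose
    tags do not verify (a real scheme would use random-looking tags)."""
    packets = [(wheat, mac(wheat))]
    packets += [(m, b"\x00" * 32) for m in chaff_msgs]
    return packets

def winnow(packets, key=KEY):
    """Recipient keeps only the packets whose tag verifies."""
    return [m for m, tag in packets if hmac.compare_digest(tag, mac(m, key))]

packets = add_chaff(b"attack at dawn", [b"attack at noon", b"retreat at dusk"])
print(winnow(packets))  # only the genuine message survives winnowing
```

An eavesdropper without the key sees all variants as equally plausible, which is exactly the uncertainty Alice wants to create.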

December 2, 2010 11:10 pm

Hehe, Mr. Lipton, the news world is collapsing. People are going crazy. Conspiracy theorists are disappointed that the leaks do not confirm their stuff. Politicians are asking for executions. And somehow, this theory problem is still what attracts you the most about this issue. I love this blog.

• December 3, 2010 1:47 pm

I love this reply. Love it!

December 3, 2010 8:29 pm

IDEM.

December 2, 2010 11:23 pm

We have no way of knowing that a few lies weren’t slipped into the current trove of leaked diplomatic cables. It’s perfect cover, because the government has basically admitted that the cables are real, without authenticating the whole pile… which points out a problem with the obfuscatory leaks defense. Without automation you’ll never get your obfuscatory leaks out fast enough to do any good, because you can’t read and alter the leaked documents fast enough without enlisting a huge number of people, and of course giving the game away. Check back after the Singularity, there ought to be strong enough AI for it, then. 🙂

• December 3, 2010 3:32 am

Check back after the Singularity,

If there happens to be a Singularity which truly is what it is purported to be, NO rules/ideas from the previous era will still be valid (that is the very definition, isn’t it?).
Therefore all arguments like, “after the Singularity this…” or “after the Singularity that…” are moot.
Most especially the paranoid scares from the agitated SIAI people.

December 2, 2010 11:39 pm

I might have missed something (late hours), but, at least in the “react after leak” version, the “wikileaks side” can easily react by ignoring (or stamping as false) any subsequent fake leaks. That is, of course, if they recognize that the “state side” is utilizing this method.

Mass media could be manipulated (and perhaps that is enough) but those informed directly by the wikileaks site would not be swayed.

December 3, 2010 2:16 pm

I had the exact same thought. Nice point!

December 2, 2010 11:48 pm

At least in the case of WikiLeaks, it seems unclear to me that a strategy like this would work. First, there is the issue that it is known that D is leaked *by WikiLeaks*; Alice could try to also leak F through WikiLeaks, but since they received the similar document D first, they may not publish it. As you mentioned, Alice might try to pre-emptively leak F, but it seems unclear that a legitimate-looking F would really contain *no* sensitive information, and thus Alice would be reluctant to leak it until she really has to.

Another issue is that D might not be a document but say a video, like Collateral Murder. I don’t know how the strategy would work in this case. What are you going to do, stage a video of soldiers not killing people?

December 3, 2010 12:25 am

Disinformation tactics like the one proposed are already being used widely as means to various ends: copyright owners flood e.g. BitTorrent networks with fake, low-quality, or simply bogus versions of their works to make it hard and tiresome to find and download good-quality versions of the pirated work. Much more worrisome, I find, is the application of this tactic in politics, where it is attributed to science: for each scientific study supporting the views of one party, miraculously there is another “scientific” study brought forward supporting the views of the opposite party, ultimately leading to confusion of the public and, worse, the discrediting of science to the public. Just look at the climate change debate.

December 3, 2010 3:57 am

Interestingly, I talked about this very idea just yesterday evening. There is a huge caveat with it:

Lying is difficult.

While it is easy to generate different but semantically equivalent documents, if you change the document with respect to content, you are facing the liar problem: you have to arrange your lies to be consistent, or you will be caught easily. This becomes even harder because it is not sufficient to be consistent within a single document; the lies must also be consistent across documents.

Interestingly, you do not have to be fully consistent; it is sufficient that the inconsistencies are hard to verify. Our favorite problem SAT sends greetings from (not so) afar.

I personally have very mixed feelings about wikileaks. I strongly feel that our modern democracies can benefit greatly from rigorous transparency. On the other hand, neither intention nor prudence of the leaks is even considered – by design – on wikileaks, and information can threaten lives. Leaking must not cross the line where individual people are endangered.

December 3, 2010 4:28 am

We also should think about the different decay rates for the benefits/damages of leaks.

There should be a grace period for classified information, because the damage of leaks isn’t stationary; it decays faster than the benefits of retrospection. Those cables from 1966 really aren’t harming anyone, but they could be very useful in understanding the Cold War. Information about why we entered Iraq could be very useful ~10 years later, now that we’ve had time to reflect.

December 3, 2010 11:10 am

This blog is international. So you should explain who is “we” when you say “why we entered Iraq”.

December 3, 2010 4:53 am

Note that governments currently leak false information to the media all the time. For this they can use the legitimate media, like the NYT or the Washington Post. Like your proposal, this is used to distract from government crimes and abuses of power, often by suggesting that there’s an imminent terrorist threat, or that the real story is about Joseph Wilson and not what he found in Niger.

Or it’s used to float trial balloons, like the Obama administration’s leaks about its plans for extra-judicial assassinations of US citizens. That way it has the plausible deniability that you mention, but they get to make the concept sound less outrageous.

December 3, 2010 5:14 am

Sorry if my comment is going to be naive. If the false documents are released using WL, then WL has the power to hold back the false documents. If the new false documents are leaked using different channels, it is sufficient for WL to sign its documents with its private key (with the corresponding public key distributed through a strong web of trust, which WL has). This signing strategy works even if the false documents are leaked the day after.

In a more general setting, the operatives should be able to distinguish false and true documents to do their jobs, and they are the most probable sources of the leaks. Your strategy would work if the sources of the leaks are usually casual handlers of documents: these people may not be able to distinguish good docs from bad ones. And I guess WL is not in the position to check the level of expertise of its sources to a very fine degree.

December 3, 2010 6:37 am

+1 to this.

In the case of reactive actions there will always be a chunk of “original” documents, distinct from all the others.

The described method would work in a non-digital environment where precision of time is poorer and a lot of historical information is lost.

To make this approach work it should be 1) proactive, and 2) insiders should be unable to distinguish true documents from false, which basically means replicating the organization several times.

December 3, 2010 5:54 am

All,

My goal with this post was not to argue for or against Wikileaks, but to point out that there may be ways to protect information after a system has been broken into. The methods here could apply to your bank records, your health records, your web activity, and so on. Many of us would like to keep that information safe. No?

The whole point of security and cryptography is to stop people from reading information. I assume that no one is suggesting we stop doing research into that, or that we stop funding it.

I do agree that some secrets are “bad,” but the trouble is that which are good and which are bad is subjective.

So in summary, my point was that there could be ways to protect information even after it has been stolen. I think this raises many hard technical questions of how to do it. Also, as with all technology, it raises questions of its use and potential misuse.

• December 3, 2010 7:57 am

Lipton says: The methods here could apply to your bank records, your health records, your web activity, and so on.

Now *that* remark is to-the-point. But these issues are associated with four key questions that are mighty tough to answer: “Does democracy work? Do unregulated markets work? Does nation-building work? Does the Enlightenment work?”

Imagine a world in which all ciphers are perfectly unbreakable, all communications are perfectly untappable, and all security screens are perfectly reliable … even in that mathematically secure world, answering those four key questions would not be significantly easier, than in our present world.

A hilarious and illuminating exposition of these beyond-informatical questions can be found in the scenes in the movie The Incredibles between the incognito Mr. Incredible and his boss at the InsuriCare Corporation, Gilbert Huph.

• December 3, 2010 8:13 am

To follow on Dick’s comment, there’s a principle that broaching an idea does not necessarily mean you support it. Here I’m talking about this post’s idea, but this also underlies my paragraph above about ideas Y,Z,W… in general.

What I feel as the greatest threat to an open society is the way ideas get demagogued and demonized, where the raising of the idea is taken with mistrust of intention. Witness the health-care debate with “death panels” etc.

My 3 cents on the scheme itself is that it could be effective only in the “online” mode, whereby fake versions are always created alongside the real. My concern is that this would greatly increase the complexity of Alice managing her organization, with extra energy and security structures needed to keep track of which versions are true and who has access. Or is this complexity already present today?

December 4, 2010 3:57 pm

Yes, I agree. This is simply poisoning the well. What happens later when there’s nothing fit to drink?

The problem with transparency is that it almost always seems to go in one direction.

This situation plays no part in the initial discussion, which is why, given the current circumstances, it invites these kinds of posts.

December 3, 2010 6:16 am

I think this proposal is actually beneficial to Wikileaks’s cause. After reading Assange’s paper on conspiracies[0], it seems his goal is simply to limit the flow of information through a conspiratorial network.

If Alice started issuing many versions of a document, the total information flow through the network of conspirators would be diminished, because every node in the network would have to be informed of the “one true” version of the document in question.

Communication through the network becomes difficult because _everyone_ will have to be skeptical of their source.

December 3, 2010 7:11 am

I’m not sure this would work; at least, it can’t be used to mitigate losses caused by Wikileaks in particular, for the following reason.
Wikileaks has already released two sets of documents which everyone knows are completely genuine. So, when Wikileaks brings out another set of documents, people are more willing to trust what Wikileaks says than what anyone else says. Especially as Wikileaks has (as far as I know) no financial interest in releasing these documents.

December 3, 2010 8:27 am

Take a look at http://www.schneier.com/blog/archives/2010/06/wikileaks.html for a view from Bruce Schneier on Wikileaks. I should have linked to it before.

December 3, 2010 9:02 am

Creating one fake that’s convincing is easy. Creating 250K of them, when one overt machine screw-up will discredit the whole batch, is beyond current natural language processing technology. Indeed, part of why the media gets interested in *big* batches of documents is exactly that the work to convincingly fake such a large set is so difficult.

That said, this is a long-standing and useful disinformation strategy for small sets of documents, and I agree there’s huge potential for personal information here. Information consumers are going to have to worry increasingly about the pedigree, not just the face plausibility, of information.

December 3, 2010 9:33 am

i was just talking last night with friends about something that i think is interesting: the really smart people in the world, and the best in their craft, take big risks occasionally, and are willing to fail badly occasionally. when they do well, they can sometimes do really well, but they’re also able to fail spectacularly. it’s a part of the natural variance inherent in any interesting collection of ideas by a creative person.

sadly, “How To Stop Wikileaks?” is a member of the ‘spectacular fail’ variety.

i’m a big fan of the blog generally, but this topic just falls down and skins its knee.

i) you’re basically asking if there’s a theory way to subvert the relevant section of the 1st amendment. turns out, no.

ii) as massimo lauria pointed out, public keys can be used in their intended fashion and can authenticate real leaks.

iii) once data is made available, it can’t be made unavailable in any modern democracy. to intend to obfuscate it is just plain wrongheaded.

i’m sticking with points i) and iii) although they’re the most opinionated and least related to the blog as a whole, having no theory content.

by the way, including the following line:

“I hope those on both sides of this issue can discuss the ideas here for their own merit.”

does not mitigate the bad intentions inherent in the idea.

s.

• December 3, 2010 10:20 am

OK, I guess you’ve cooked our “Christmas Goose” in place of “Thanksgiving Turkey” 🙂

Nice first paragraph. As for “bad intentions”, I take it you mean the idea itself, not our motive for raising it—otherwise see my comment above.

December 3, 2010 10:26 am

🙂

no, i can’t fault the motive for raising a theoretical question. few things could be as pure.

s.

• December 3, 2010 10:53 am

Ken, this particular topic (“How To Stop Wikileaks?”) raised questions that don’t have clean mathematical answers, and then (implicitly) assumed that these questions do have clean mathematical answers.

Historically, this rhetorical maneuver has a dubious record. As was said by the strategic computer Joshua in the movie War Games (in reference to the game “Global Thermonuclear War”):

Joshua: A strange game. The only winning move is not to play. How about a nice game of chess?

So far, a solid majority of posters (including me) agree with Joshua’s analysis! 🙂

December 3, 2010 11:06 am

This is a great post, Ken, as always.
Don’t mind the Wikileaks apologists here. They are not convincing anyway.

• December 3, 2010 11:31 am

Another mathematician: Don’t mind the Wikileaks apologists here. They are not convincing anyway.

Surely that is a Bohr-style “Great Truth” … meaning that its opposite *also* is a Great Truth. That opposite truth was uttered by Samson Kutateladze in his obituary essay Arnold is Gone:

It is not shameful to be a mathematician but it is shameful to be only a mathematician.

So according to Kutateladze (and perhaps Arnold), we should not give in to the temptation to be “only mathematicians” in discussing the tough issues raised by Wikileaks.

December 3, 2010 12:55 pm

Steve,

Thanks. I guess we failed. Oh well. Next time we will avoid this issue, for sure. Let’s talk about ….

December 3, 2010 10:04 am

This is indeed a very interesting idea, but I don’t think it will work.
First of all, it requires Alice to act before the leak has actually been exposed; otherwise the real data will already have been identified.
Second, it only works for a rather short period of time. In the long run the real data will eventually be found. What’s even worse, the key points where the original data differed from the fakes would serve as highlights of the most sensitive info, thus increasing, not decreasing, the damage from the leak. So it will only be used under second-term presidents.

December 3, 2010 11:05 am

Doing wrong and calling it a secret, classified, or confidential document won’t fly in the internet era.

December 3, 2010 1:39 pm

Indeed. But fortunately no wrong-doing was exposed in the last week by Wikileaks, so this is not relevant here.

December 3, 2010 12:50 pm

A simple question:

What if all the health records were stolen from your local hospital. Would you say: great, it’s fine, information needs to be free?

Or would you like some way to mitigate the disaster that had happened? If the hospital had some method, this or another, to help get the records “erased,” would that not be good?

That is what we are thinking about.

• December 3, 2010 1:09 pm

Dick, it commonly happens that a simple, elegant mathematical answer can be attained only by posing questions that are simple and mathematically elegant … which in turn are defined by postulates that are simple and elegant.

But no health-care system in the world—in particular the present health-care system of the USA—is defined by postulates that are simple and mathematically elegant.

Quite the reverse … opportunities to “game” the present system are so ubiquitous—and so financially rewarding—that almost any innovation that is so simple as to be mathematically elegant, is heuristically likely to be exploited for purposes that are … well … morally objectionable, or even outright criminal.

Here I am speaking as a medical school faculty member who has, in recent years, attended numerous Grand Rounds at which the topic of discussion was (regrettably) not the practice of medicine, but rather, administrative strategies for securing an assured supply of modern-day America’s most precious (and increasingly scarce) medical resource: patients possessed of good health-care insurance.

• December 3, 2010 7:50 pm

But then this proposal is even worse; to distribute misinformation regarding Health Records would certainly be more harmful than good.

As others have mentioned, I don’t quite understand the proposal because signing documents so trivially (in theory) avoids the problem. Of course, not everyone will confirm the signature or care, but it’s definitely the solution.

If your question is more “How to remove information once it is released?” Then yes, “clouding” the matter seems like one approach (aside from simply shutting down every single person who discusses it); and it’s an arguably interesting thing to think about.

December 3, 2010 12:57 pm

In principle, the 1st amendment allows anything to be published, but reserves appropriate punishment in practice. The idea was mainly inspired to prevent Star Chamber trials, but the downstream effects of a “free” press are well-appraised by Thomas Jefferson as both truly essential and truly unbelievable.

In any case, a more formally regulated channel might contribute insight. With research involving human subjects, Al Gore’s legacy has been a new sense of privacy enforceable by CFR. People have lost their jobs by looking at medical records of celebs, for example, although I don’t think any institution has been fined for weak systems of security. Protecting data, or de-identifying it, occurs over a continuum where one end is secured by not exchanging data to the other end where there is no privacy.

“The sole natural object of mathematical thought is the whole number. It is the external world that has imposed the continuous upon us…”

Poincaré at the first International Congress of Mathematicians, Zurich, 1897

December 3, 2010 2:30 pm

The difference between this and medical records at a hospital is that the Wikileaks documents have (a) exposed large amounts of criminal activity, and (b) have concerned the official actions of people working for democratically elected governments. This is relevant because in many cases, the documents were improperly classified, among other reasons, to conceal criminal activity. The issue is not about information wanting to be free in general but about the more specific point of whether the public is able to know what its elected representatives are doing.

I think your commenters are upset with your post because you are trying to come up with ways for democracies to better conceal their illegal (or at least unpopular) activities from their citizens, who in principle they should be accountable to.

December 6, 2010 8:24 am

Exactly my thoughts. [A non-US computer scientist compelled to delurk.]

December 3, 2010 2:32 pm

As others have pointed out, this strategy is already used by intelligence agencies, hence the term “FUD” (fear, uncertainty and doubt). Even better, it is rumored that Apple crafts fake leaks with different signatures and distributes them to different subunits of the company to detect where leaks are emanating from.

December 3, 2010 3:24 pm

This discussion, it appears, has two sides:
1) What is “private” and must be protected?
http://www.danah.org/papers/talks/2010/WWW2010.html
Research enabling this should of course be supported. Although Facebooks of today make even this hard.

2) What is not, and hence requires more transparency?
Any public body like a government has no right to privacy, at least from the point of view of its citizens. We will get to other countries’ citizens in a minute.
Why should political boundaries even exist, if not for financial dominance (exchange rates, trade, etc.)? The internet is a great leveller because knowledge and information flow freely; the internet will therefore make these institutions obsolete. Hence the folks whose livelihoods depend on the existence of these institutions have to fight back: politicians, state leaders, businessmen. Oh wait, aren’t these the guys fighting wikileaks today?

28. December 3, 2010 5:00 pm

Looks like the windup here is a basic moral principle, one with no religious-secular divide:

Deliberate proliferation of falsehood is wrong.

Not to mention counterproductive…

It would be interesting to see if the amount of ambient falsehood, in an organization such as Alice’s, can be quantified as an operating cost. The true data occupies one corner of a space that would have all the false variants. Thus we have at least an analogy to an entropy concept. The smaller the fraction of truth, the more energy must be expended to access and maintain its coherence, and likely this measure would involve units of entropy.
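The entropy analogy above can be made concrete. As a rough sketch (the formula and the "bits of sifting work" framing are my own illustration, not anything the comment specifies): if the true data is a fraction p of the archive, singling it out requires on the order of -log2(p) bits of distinguishing information.

```python
import math

def bits_to_locate_truth(true_fraction: float) -> float:
    # Entropy-style measure: if genuine documents make up a fraction p
    # of the archive, singling them out takes about -log2(p) bits of
    # distinguishing information ("sifting work").
    return -math.log2(true_fraction)

# One true document among a hundred: about 6.64 bits.
print(round(bits_to_locate_truth(0.01), 2))  # prints 6.64
```

The measure grows as the fraction of truth shrinks, matching the intuition that more ambient falsehood means more energy spent maintaining the coherence of the truth.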

December 3, 2010 7:20 pm

Leaders of democracies lying to the people they lead is what WikiLeaks exposes. This sort of lying is basically the worst and most dangerous crime against civilization. If extensive enough, civilization will implode, as has happened before, and will take down its pure theoretical servants with it. More lying would not solve the problem of democracy being destroyed, but make it worse; lying more would be an attempt to drown the fish in more water. Democracy without transparency is called oligarchy at best, plutocracy at worst. So bravo WikiLeaks! And too bad for the double-dealing, double-faced dictatorships exposed by WikiLeaks: they are the first to silence and kill intellectuals.

This being said, disinformation, spreading deliberate lies to hide a truth through sheer confusion, is nothing new, and has been used for millennia. There are countermeasures, especially in modern times, such as figuring out when the original leak came out. For hospital records and other internet rumors about ordinary citizens, though, disinformation ought to work pretty well.
http://patriceayme.wordpress.com/

December 3, 2010 10:00 pm

The difference from leaking personal health records is that the leaked cables show illegal activities and violence perpetrated in our name, and that we are paying for.

In contrast, if you leak my health records, it has nothing to do with you, so it is private. On the other hand, if wikileaks did an expose of health records at a hospital to show that the hospital was doing unnecessary operations to make money, or did an expose of health insurance files to show how it was denying necessary coverage, that would be a leak worth having. Hopefully, they would XXXXXX out names like they do now. But I would be happy with such an expose even if it did use my name, because of the future benefit that addressing such an issue would bring to myself and others. Also, in that case, use of my real name would make it easier for me to sue the hospital or health insurance company.

December 6, 2010 5:43 am

Sorry, but Wikileaks didn’t expose any governmental lies in recent weeks, only secrets and private assessments. Claiming that people, or in this case governments as representatives of the people, should not hold secrets from their enemies is obviously absurd, and can be explained only by anarchism or pacifism: two inconsistent views.

December 3, 2010 11:28 pm

Umm… The most effective lies are built on grains of truth. People used to point to “big tobacco” as an exemplar of your technique. Nearly every major organization has used it for centuries. “Knowledge is power.” Secrecy and deceit are nothing new.

December 4, 2010 3:22 am

I think the problem posed in this form is probably insoluble. Maybe there is another way of thinking about this: how about finding ways to minimize the need for privacy? Make most things open and find ways to benefit from, or reduce the harm of, the openness; a sort of optimization problem where you minimize the number of things that need to be protected. A good argument could be made that it is precisely the excessive number of secrets that leads to an increased number of leaks, increased size of leaks, increased opportunities and things to leak, and probably the amount of damage from leaks. The likely reaction of the US government – to make more things secret – will in a sense increase the ‘surface area’ needing protection, and will likely make the problem worse.

December 4, 2010 3:40 am

If the members of the organization can distinguish which documents are real and which are fake, that itself can be leaked and you’re back to square one. If they *can’t*, then the documents are worse than useless to the organization and shouldn’t be stored in the first place.

33. December 4, 2010 6:01 am

Two points. First, even if it is correct that if you react too late to a leak then the strategy of leaking false but similar information won’t work, it would still be possible to use such a strategy preemptively for very sensitive information. That is, in your organization’s database you can keep information that is 99% false (and have a private protocol for sifting out the 1% that is correct). Then if a leak occurs you can say, “Ha ha, the chances that the document you are looking at is genuine are one in a hundred.”

Second point: although not strictly relevant, I can’t resist offering the following link.

http://www.bbc.co.uk/news/magazine-11887115
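One way to realize the “private protocol for sifting out the 1% that is correct” mentioned above is a keyed MAC. This is a sketch under my own assumptions (the comment specifies no mechanism): genuine documents are stored with a tag computed under a secret key, fakes carry random filler tags, and only key-holders can tell the difference.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # held only inside the organization

def tag(document: bytes) -> bytes:
    # Genuine documents are stored alongside this MAC.
    return hmac.new(SECRET_KEY, document, hashlib.sha256).digest()

def is_genuine(document: bytes, stored_tag: bytes) -> bool:
    # Constant-time comparison; without SECRET_KEY, a random 32-byte
    # tag on a fake document is indistinguishable from a real one.
    return hmac.compare_digest(tag(document), stored_tag)

real_doc = b"the one accurate record"
fake_doc = b"a plausible but false variant"

real_tag = tag(real_doc)
fake_tag = secrets.token_bytes(32)  # fakes get random filler tags

assert is_genuine(real_doc, real_tag)
assert not is_genuine(fake_doc, fake_tag)
```

Of course, as a later reply observes, the key itself then becomes the crux: leak SECRET_KEY and the whole scheme collapses.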

• December 4, 2010 9:13 am

Gowers asserts: It would still be possible to use such a strategy preemptively for very sensitive information.

Tim, your strategy leads to a maze of mirrors, that (hilariously) explains many aspects of our modern world!

Suppose in general that a simple technology exists for <harmful activity>. To mitigate the risk of leaks, one releases thousands of infeasible descriptions of <harmful activity>. The existence of these thousands of leaks then provides compelling evidence that <harmful activity> is feasible. As Krusty the Clown often says: “Uh Oh … that’s not good.”

Hmmm … from this principle we can conclude that (1) sasquatch is real, (2) red mercury stabilizes thermonuclear implosions, (3) breaking RSA is easy, and (4) there really are alien spaceships at Roswell … since all of these memes exhibit the amazing durability that is associated with disinformation methods.

Thus, a severe long-term downside to disinformation methods—no matter how mathematically elegant those methods may be—is that they foster a world in which conspiracy theories are a near-optimal cognitive strategy.

These considerations provide further justification for Ken Regan’s principle that Deliberate proliferation of falsehood is wrong … and yet Mark Twain’s essay On the decay of the art of lying makes some mighty good points too!

• December 13, 2010 2:39 pm

Isn’t the leaking of the “private protocol” itself the crux of the problem? What you’re suggesting is nothing more than limiting the potential points of leakage. It’s the age-old trade-off between ease of access and security.

34. December 4, 2010 7:36 am

No matter what you think about the leaks, at least it brings some fun. 😀

December 4, 2010 8:19 am

It can easily be “stopped” by feeding it poison pills, i.e. disinformation, propaganda, too-good-to-check info. When it is nothing more than a gossip rag with no credibility it is dead. Or has that already happened?

December 4, 2010 10:16 pm

It’s an interesting idea, perhaps for personal information revealed on social networking sites, but I don’t see how it would work for something like a leak. Once both document sets are released, WikiLeaks and/or journalists can simply identify an issue on which the two sets disagree and independently verify what actually happened, thereby revealing the false document set. And somebody trying to publicize false documents is a big story in itself, so it can backfire spectacularly. Plus, credibly distributing the fake document set is a non-trivial task – WikiLeaks only distributes documents once it’s able to ascertain their authenticity, and journalists would almost certainly prefer the already-vetted information to an anonymously delivered stack of documents or, even worse, an official leak.

December 5, 2010 4:34 am

There is a theoretical limitation to the success of this leak-mitigation strategy, and it resides in your stipulation that the fake documents be realistic enough to be believable. You recognize that this is such a major task that it will need to be automated; I believe that the density of false documents required would materially impact the functioning of the administration attempting to manage them.

38. December 5, 2010 12:38 pm

It’s a hardware problem. People whose lives are put in extreme jeopardy, and who use certain hardware for a living, will no doubt put it to use to eliminate the current perpetrators and discourage future copycats.

December 6, 2010 6:04 pm

To quote from a New Yorker article by Raffi Khatchadourian on Wikileaks:

“Moreover, at any given time WikiLeaks computers are feeding hundreds of thousands of fake submissions through these tunnels, obscuring the real documents”

“these tunnels” being the secure transaction pathways used for submissions. Not unlike the idea proposed here.

40. December 7, 2010 9:14 pm

No desire to contain wikileaks, but interested in the underlying problem. It appears that with enough resources you can post enough disinformation on the web, interspersed with the legitimate leaks. The impact of the disinformation should undermine the legitimacy of the original wikileaks publications. The one well-known source of the leaks needs to be taken offline, though.

December 8, 2010 1:24 pm

As discussed in his papers (see analysis at http://zunguzungu.wordpress.com/2010/11/29/julian-assange-and-the-computer-conspiracy-%E2%80%9Cto-destroy-this-invisible-government%E2%80%9D/), Assange is trying to make the authoritarian system dumber, so that it can’t think as well. I think your idea about generating false documents and leaks is interesting, but it probably has roughly the same effect. The less even the insiders can trust what are the real documents, the less well the authoritarian/conspiratorial regime can function.

December 9, 2010 4:33 am

Unfortunately Dick’s instinct towards restricting information flow is clear, and reflects an earlier experience I had where a comment on advertising on this blog was censored. Of course, following on in this vein, I doubt this comment will be published.

To compare the leaking of damning government documents with that of hospital records is disingenuous at best, and is an example of the proposed obfuscation (surely a simple application of the public-interest test disambiguates the two?).

I suspect this is a generational thing as much as anything, with the older, established order wanting to maintain the power structures from which they invariably benefit (another manifestation of which is that comments here are moderated, compared with the more frequent instantaneous publishing on gen-Y blogs, e.g. Scott Aaronson’s).

It is quite striking how the degree of opposition to Wikileaks is in direct proportion to the loss of power that it threatens. At one extreme we have the most extreme opposition coming in the form of assassination proposals from members of the political class, who see their opportunity to lie ebb away. This post, of course, is an example from the much less extreme end, and its intellectualizing, balanced canvassing (not to mention the other posters) reassures me that not all Americans are mad!

P.S.
What an odd world we live in – I’ve just had my mastercard donation to wikileaks blocked but apparently can still donate to the KKK !?

I should stop now given this has no chance of leaking onto this blog!

• December 10, 2010 9:28 pm

Comments here are only very lightly moderated. I guess you thought mentioning the KKK would do it, but it would take advocating the KKK… There’s also an automatic hold on any first post from a new ID.

43. December 9, 2010 4:51 am

According to the theregister http://www.theregister.co.uk/2010/12/08/mastercard_downed_by_hackers/ some group called “anon” hurted Mastercard relatively badly because of WL.

Suppose you magically kill WL. Probably this will piss off many a person. Are you comfortable with pissing off many a person?

December 9, 2010 5:02 am

Ok, maybe I spoke too soon as it looks as if one of my favourite blogs now has no distracting google ads – brilliant!

Sorry if the tone of the last blog sounded a tad aggressive – never mind facebook – is that automated obfuscator available for blog posters?

December 9, 2010 2:05 pm

Ok, maybe I spoke too soon…

Maybe, maybe not; I have a comment “awaiting moderation” myself, and if it doesn’t show you may have to downgrade your optimism a notch.

December 10, 2010 12:02 pm

Yup, my comment was “moderated off”, likely because it contained an “annoying” link.
Will Dick Lipton also find that a link to Daniel Ellsberg is unwelcome?

December 11, 2010 11:44 am

jld

The moderation is by spam filter system of WordPress.

December 11, 2010 1:41 pm

Then the “WordPress spam filter” is extremely suspicious and biased.
It removed, not a spam link, but a link to a pro-Wikileaks Russian nationalist, which I will try to mention outside an A tag:
http://www.venik4.com/2010/12/network-attacks-against-paypal-are-effective/
Big brother is already taking care of YOU, lest you see “annoying material”; isn’t that interesting?
I have had other “mysterious deletions” from WordPress, unbeknownst to the actual blog owners, on totally different topics.
Maybe Orwell had a point?

December 10, 2010 1:10 am

“The diffusion of information and the arraignment of all abuses at the bar of public reason, I deem [one of] the essential principles of our government, and consequently [one of] those which ought to shape its administration.” –Thomas Jefferson: 1st Inaugural Address, 1801. ME 3:322

I would agree with that, and assert that a policy of regular dis-information, such that citizens of a republic have no idea what their institutions are up to, would be fundamentally corrosive to the very idea of a democracy built on an assumption of an informed electorate.

46. December 11, 2010 6:28 am
December 11, 2010 2:49 pm

I still like Barack Obama and Hillary Clinton, because I, and people who have nothing to hide, think that the USA will change into a country that can be trusted.
Meanwhile, Wikileaks distributes information that supports this development.
The job of politicians should be to look out for their own inhabitants, but also for the people in other countries.
No government should be egoistic…

48. December 15, 2010 2:12 am

I’m surprised you didn’t suggest the idea of preemptively having fake documents lying around. In other words, store tons of fake documents with the real documents at all times. That way, someone without a special key indicating how to identify real/fake documents won’t even know that they’re leaking fake information. You can even give people who are supposed to have access the wrong key by default in case they leak the information that there’s a key. Problem solved… except for the ton of other problems that could cause, haha. 🙂
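The “wrong key by default” idea above could be sketched with a keyed pseudorandom selector. This is my own illustration, with hypothetical names and densities: any key picks out a plausible-looking subset of document slots, but only the true key picks out the slots where real documents were actually stored; a decoy key selects slots holding fakes of the same style.

```python
import hashlib
import hmac

def selects(key: bytes, doc_id: str, density: int = 100) -> bool:
    # Keyed pseudorandom choice: roughly 1 in `density` document slots
    # pass for any given key, and different keys select different,
    # essentially disjoint subsets.
    digest = hmac.new(key, doc_id.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % density == 0

# The organization would store real documents only in the slots chosen
# by the true key; holders of the decoy key see a different subset.
true_key, decoy_key = b"true-key", b"decoy-key"
slots = [f"doc-{i:05d}" for i in range(10_000)]

real_slots = {s for s in slots if selects(true_key, s)}
decoy_slots = {s for s in slots if selects(decoy_key, s)}
print(len(real_slots), len(decoy_slots), len(real_slots & decoy_slots))
```

A leaker holding the decoy key would confidently leak the fakes, which is exactly the “problem solved … except for the ton of other problems” the comment anticipates.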

December 15, 2010 7:14 pm

If only there were a zero-knowledge protocol that could be used by the public to verify that a secret matter of national security was in fact a matter of national security, without revealing the secret. This possibility seems unlikely though, and there seems to be little interest in applying mathematics to fine-tune democracy.

I like the line of argument that not prosecuting publishers of leaks provided by whistle-blowers is “the best system we have” for ensuring that matters of national security are secret for a good reason.

December 20, 2010 1:10 pm

This post reminded me of the very cruel aspect of cyber-bullying carried out on social networks.
I am not familiar with all arenas of bullying, but beyond posting compromising or doctored information on the network, people have a wiki-like freedom in maintaining fake profiles.

To generate a fake profile, a group of n perpetrators (who are known by the world to be familiar with the target) befriend the fake version of their target on the social network and mimic the original profile in social content. They post spurious updates claiming the original account has been hacked, or some such. Additionally, they send new friend requests to the original friends of the original account. No actual hacking or leaking is involved here, just doctored profiles. Credibility is lent to the fake profile by the perpetrators’ familiarity with the target.

Obviously the real identities could easily be resolved by using a social equivalent of encryption keys. But the bullying might do irreversible damage by then. Since all the perpetrators are human, CAPTCHAs don’t work well in thwarting their actions.

How can social network founders strategically prevent cyberbullying?

December 21, 2010 8:48 am

The strategy has a flaw. If the original documents D1…Dn were released at time T1 and Alice releases her fake documents F1…Fn at time T2, and the strategy is generally known, then it can be deduced with high probability that D1…Dn are the real documents, based on the time difference between the releases.

December 24, 2010 11:43 am

Anyone who believes transparency puts us in greater danger than secrecy is insane.

Only about 1% of these documents have been released, and here some people are attacking the messenger instead of addressing the message, and that is what this is coming to, a gut check for humanity:

Do we continue with “business as usual” or do we collectively stand up and change the way business is done.

I welcome it.

December 24, 2010 7:28 pm

In my conversations with mathematicians who have worked for the defense industry, each one spoke of the need to protect “our interests”, while taking the pronoun “our” and terent of “interests” for granted. I submit that none of them were capable of defining those “interests” with any degree of precision–certainly not with any pretense of mathematical precision–and that before any attempt is made to design methods to protect “our interests”, that they attempt to define what exactly those interests are, whether these involve violations of moral rules, and if so, why such violations are justified.

My “line in the sand” argument is this: not prosecuting publishers (as opposed to whistle-blowers who engage in civil disobedience) of classified documents is the best system we have to ensure that matters of national security are secret for valid reasons. The onus rests with anyone who disagrees to invent a better system.

December 24, 2010 7:30 pm

Typo: the referent of ‘interests’

February 14, 2011 7:18 pm

Touché, my friend. Only if you have something to hide [like illegal/immoral activity or things the voting public would not like] is secrecy better. Yes, there are some things that should be secret, like plans for a nuke; but that is the exception and not the rule. Wikileaks has not released anything that threatens US national security; they have, however, released plenty that threatens criminal [politicos] and their plans. That’s the real problem. Substitute ‘criminal’ for ‘national’ in the talk of harming national security and you will have the real picture.

December 25, 2010 5:03 am

In 8 years, exactly on Christmas, will Assange be declared a hero and you a bad man?

btw, you can search for the meme “one man’s hero is another man’s terrorist”

54. January 6, 2011 1:24 am

The only way this would work would be if the fake documents were leaked first. What if an agency like the NSA, with thousands of employees, were to create a set of thousands of edited documents with the really incriminating stuff removed, and then circulate that to all of the lower-level and honest people? Of course you would have to include some material that might be embarrassing, but you could rest easy knowing that the proles would be distracted and entertained forever by thousands of pages of gossip. Add in a few documents that seem to validate whatever policy you want to protect, and then all you have to do is get someone to leak the package.

I think that there is a good chance that this is what just happened. Perhaps if Wired were to release the entire conversation between Lamo and Manning we might be able to validate (or otherwise) this suspicion.

February 14, 2011 7:08 pm

Excuse me, but when did the First Amendment become an attack on America? Wikileaks has done nothing illegal, but it certainly sounds like you are planning to. The old ‘the ends justify the means’ business. Perhaps you would be better suited to life in Iran or China than to a free and open society.

56. February 25, 2011 2:15 pm

This is not a new strategy. Applying it to governments is, however, very unethical. The government can be caught keeping secret something that people want to know, or information that citizens wish hadn’t come to light may come to light. In either case, the response cannot be allowed to be lies, whether or not they are generated by an algorithm.