A number of recent stories illustrate the possibilities and perils of 21st-century information technologies. I recently discussed Hillary Clinton’s email troubles, but several newer stories continue the trend.
In the last couple of days, we’ve learned that the database of the Democratic National Committee has been hacked, apparently by hackers associated with the Russian government. They were apparently able to root around in there for a year or more before they were detected, collecting messaging and email traffic, the DNC’s database on Donald Trump, and who knows what else. Likewise, there is at least some evidence that Clinton’s server may have been similarly compromised. She received, and apparently replied to, a phishing email sent from the account of an advisor, and may have clicked a link that installed malicious software. Later, some email from an advisor wound up on a website purportedly associated with the Russian government. Exactly what transpired is unclear, but obviously some damaging possibilities exist.
These, among many other examples, illustrate some of the perils of modern information technologies. They are certainly very useful for communicating, for gathering very large amounts of data, and for analyzing that data. But not everyone who wants to see that data is a good guy, or has a right to see it, so when it gets hacked, a great deal of damage can be done.
So now we’ve got all this big data, and we’re properly concerned, as illustrated above, that it can be misused. To prevent that misuse, we impose an assortment of controls, privacy rules and maximum retention periods upon it.
But then, in the aftermath of the recent terror attacks in France and Belgium, we learn that the authorities’ ability to track terrorist suspects and investigate terror attacks is hampered by the lack of information flow between the investigative bodies of the various European states. Their privacy laws restrict this flow to protect individual privacy, but an unintended consequence is that among the people whose privacy is protected are terrorist suspects and actual terrorists. So they are able to use the shield of the law to hide from the law. And even when the information is there and theoretically available, it often doesn’t get used the way it could and probably should be – Orlando shooter Omar Mateen drew the FBI’s attention for suspected terrorist ties, and had what appear to be several other red flags in his past, yet he easily passed at least two background checks, held a security guard license, and was able to purchase firearms legally.
These are only the most recent examples. On the privacy front, the National Security Agency (NSA) ran a number of programs for very large-scale data collection about all sorts of things – programs that, on one hand, significantly impinged upon the presumed privacy rights of a great many people, but on the other hand, by most informed accounts, probably prevented a number of terrorist attacks in the United States. These programs have since been significantly curtailed. We’ve already seen a few incidents in the United States similar to those in Europe, and while it’s impossible, for me at least, to confidently blame them on the curtailment of the NSA programs, it’s certainly possible, and maybe likely, that the curtailment will result in more terrorist activity here. On the data security front, the number of really large-scale data breaches is too large to list here: several at Sony, Target, Anthem, MySpace, an assortment of government agencies . . . the list appears endless.
It does not seem as though there can be a perfect solution to these problems. Collective security necessarily requires limiting some individual rights, including rights of privacy. And making information available enough to be actually useful necessarily limits how much security and protection can be placed upon it. So in both cases, any choice you make necessarily constrains some of the other factors – it simply cannot be avoided. We could undoubtedly create very safe and secure countries if we were willing to tolerate totalitarian and intrusive methods to achieve them: universal phone taps, email and messaging monitoring, expanded search and seizure powers for the authorities, restrictions on travel and so forth. But we will not do this, because we won’t tolerate the wholesale abridgment of what we see as our innate rights in the name of collective security. Nor will we really tolerate complete information security – we like having our credit card on file with Amazon, because we like being able to purchase things very easily. As it is, we’re annoyed if they make us type in a CAPTCHA or the three-digit verification code on the back of the card; we’ve gotten so used to the convenience of these transactions that even these very minor steps become irritants. It’s hard to imagine most people tolerating a great deal more verification in the name of information security. Nobody ever thanks the government for long airport security lines either, even though they’re about to board a plane that might otherwise be blown up. It just doesn’t work that way.
Call all of this the Dilemma of Democracy and Information. Those of us fortunate enough to live in democratic societies value that fact, and the rights it gives us, very highly, and we want to protect those rights. But when the Magna Carta or the United States Constitution was written, that was a comparatively straightforward thing to do. 21st-century information technologies have greatly complicated the discussion. Our rights have become inextricably intertwined with the information being collected about us and the conclusions that can be mined from it. And that information grows constantly in both size and complexity, as does the collectors’ ability to mine it. So we cannot really talk about our rights without also talking about that collection, and about what will be done with the information.
And therein lies our dilemma: if we insist on complete privacy for ourselves and our information, we necessarily sacrifice some things – including some lives – in return for that privacy. And if we insist upon complete security for our information, we likewise sacrifice things we might otherwise have. There is no perfect answer here, only a set of trade-offs, each of them imperfect. Whatever we decide, we have elected to sacrifice some of one important thing in order to get more of another.
So we enter into a great debate: how much of our information lives are we willing to trade away, and what must we get back in return to justify that trade? We individually can and do enter into that debate every time we surrender information to a website, apply for a government benefit or do a hundred other things. But there are a thousand other ways information is collected about each of us that we have little or no control over, so someone else is making that choice for us – a legislature, a government agency, an employer, a social media site. To avoid all of these is to withdraw from society, and most of us are unwilling to do that.
And that means we must continue that debate as societies, making collective decisions that will necessarily be imperfect but that cannot be avoided. Our goal should not be to avoid the debate – it cannot be avoided, because inaction is itself a kind of action. What we must do is resist the temptation to take simplistic, black-and-white positions. There is no easy position, nor can we go back: the information technologies that allow us to do these things, and the way virtually all of us use them, make going back impossible. The Pandora’s box of information has been opened, never again to close.
The way forward, however imperfect, lies in front of us.