Fort Meade, We Have a Problem …
Originally published on TPM Longform, September 30, 2013.
This past July, the highest profile hacker conference in America – known as DEFCON – made a show of publicly disinviting all federal employees from attending. It was a shock, and a reversal: Last summer not only were the Feds in the audience, they were on the stage: National Security Agency chief General Keith Alexander was the keynote speaker; he had come, he said, to “solicit” support for cybersecurity. “You have the talent,” he said, and called for more sharing between private companies and the government. (He also denied that the NSA keeps a file on every U.S. citizen.)
Few outside a paranoid fringe found Alexander’s remarks terribly controversial at the time. That has now changed.
Gen. Alexander’s appeal came from a deep understanding that the NSA cannot get very far without technologically savvy, innovative people designing secure systems, breaking encryption, and creating offensive cyberweapons. Some call them hackers: people who are obsessed with problem solving, and who love to take apart technology to see how it works and, if they’re really good, how it can work better. The world of hackers – or coders, to use a softer term – can be difficult for outsiders to understand: an obsession with cryptography, Linux, and bespoke software, and a preference for cleverness over user-friendly design, don’t exactly endear them to mainstream America.
But when it comes to the potential for cyberwar, Washington is utterly reliant on hackers to defend against other hackers. There’s just one problem: whatever dalliance hackers once had with their national security handlers, they are now firmly falling out of love with Washington.
Patriotic Hackery
After the 9/11 terrorist attacks, growing anxiety in Washington about the potential for a cyber attack produced an explosion of computer security spending, and well-remunerated jobs drew large numbers of security researchers – especially hackers. Jobs were plentiful, and the computer security (often shortened to Infosec) community began to grow symbiotically with the feds. Within a week of the World Trade Center crumbling, the FBI was issuing warnings against “vigilante hacking” after a group of hackers announced their intention to attack countries they thought supported terrorism. During the run-up to the Iraq War, the FBI issued a public warning about “patriotic hacking” in an attempt to forestall cyberattacks on Iraqi computers. By 2012, the Department of Homeland Security was openly advertising for “patriotic hackers” willing to work on securing vulnerable computers. In 2010, DEFCON 18 even had two panels called “Meet the Feds,” where officials from the Department of Homeland Security, the Air Force, Treasury, FBI, NASA, and National Security Council spoke about computer security topics. (One question, poignant in hindsight of this summer’s revelations, was “How do we conduct robust continuous monitoring across a large multi-organizational enterprise yet stay within the constitutional requirements for privacy, civil rights and civil liberties?”)
Cyberwar was good business. Hackers raked in lucrative contracts while the feds built vast surveillance systems and frightening cyberweapons. But there was always a tension between the Hacker world and their potential employers, an acrimony that came to light this summer when Edward Snowden, until June a hacker working on contract for the U.S. government in Hawaii, leaked highly classified NSA documents and methods. Snowden’s electrifying interview in June quickly went viral. “I don’t want to live in a world where everything that I say, everything I do, everyone I talk to, every expression of creativity or love or friendship is recorded,” he said.
Snowden was expressing a fear, broadly felt by many in the tech community, that the very systems they had helped to create over the last four decades were being turned toward something evil. And though he’d been vetted, and though the government trusted him to uphold a standard of secrecy, he felt his moral duty was not to his employers, but to his conscience. In the process he revitalized a long-running debate within the computer security industry: is it ever ethical to work for the Intelligence Community? And he raised awareness, on the national security side, of another problem: can you ever trust that a hacker, by his or her very nature, won’t go rogue if he or she doesn’t agree with a choice the government has made?
For the Intelligence Community, Snowden was a scary example of someone who once believed in the government’s position but had turned, suddenly it seemed, resolutely against it. Technology website Ars Technica dug up enthusiastic old posts Snowden had left on their discussion forums, dating back to 2001. As recently as 2009, he mused that people who leak national security secrets “should be shot in the balls.” Then he began to shift, noting in those same forums how many corporations were enabling government spying. “It really concerns me how little this sort of corporate behavior bothers those outside of technology circles,” he wrote.
Snowden’s very public demonstration that the government cannot control the activities – let alone the mindset – of the hackers it employs has gained him plaudits, not recriminations, in the broader coder community. “A lot of people at these [hacker] conferences make tools that find their way to the Intelligence Community,” says Chris Soghoian, principal technologist at the Speech, Privacy, and Technology Project at the ACLU. “And the last two months have made a lot of them unhappy.”
“I grew up with the understanding that the world I lived in was one where people enjoyed a sort of freedom to communicate with each other in privacy without it being monitored,” Snowden told Guardian columnist Glenn Greenwald. Snowden’s exposure of the constant, pervasive presence of government is starting to revive old anti-government antagonisms.
As Snowden’s leaks continued over the summer, his support in the tech community grew, along with broad antipathy to the NSA and the Intelligence Community and a growing skepticism about working for the government at all. “I think many people feel betrayed by and distrustful of the NSA,” says Moxie Marlinspike, a computer security researcher and cofounder of Whisper Systems, an encryption service for Android mobile phones.
This shift in attitude – coming right when computer security challenges, and the budgets to manage them, are peaking – could pose a serious challenge for the government. “I haven’t seen this level or sort of animosity since the 90s,” Jeff Moss, founder of DEFCON, recently told Reuters. Clearly the post-9/11 honeymoon between hackers and the Intelligence Community has come to an end.
A cultural challenge for the Intelligence Community
The InfoSec community is rife with people who espouse a technology-enabled, anti-authority civil libertarianism, a worldview constantly at odds with working for the government. Often shorthanded as technolibertarianism, it is best understood as a direct descendant of 1960s counterculture. Julian Assange, founder of Wikileaks, explained this in a 2006 essay in which he argued that government itself is, by definition, a conspiracy, and the only way to disrupt the conspiracy is to reduce its capacity to conspire through leaks and strong cryptography. Basically, if the government cannot keep secrets but the people can, then a balance can be found, permanently weighted on the side of individual liberty.
The tension between computer security researchers and the government using their inventions for surveillance dominated DEFCON this year. Alex Stamos, the CTO of Artemis Internet Inc., an internet security company, gave a heartfelt talk on the ethics of their industry. “Who is getting your bugs?” he asked the assembled hackers. “What are they doing with them? Whose goals are being accomplished?”
Stamos’ concern was that the hacker ethos, to the extent one even exists, inspires many of the attendees to vigorously research security systems to see how they work. He noted that the seemingly paranoid fringe of the Infosec community, which complained bitterly of government surveillance, was proven correct. “Think about your moral limits before you reach them,” he urged everyone.
Those moral limits aren’t always clear, however. The people developing this technology, who see it as an instrument of social change (even revolution), are also acutely aware it can be used just as easily by the government for oppression.
Many of the privacy innovations in use on the internet today – symbolized by a small lock at the bottom of most web browsers – came from an early group of programmers called Cypherpunks. Eric Hughes, a mathematician and one of the cofounders of the Cypherpunk ideology, wrote the movement’s influential 1993 essay, “A Cypherpunk’s Manifesto.” Privacy, according to cypherpunks, is not just secrecy – it is “the power to selectively reveal oneself to the world.”
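That idea of privacy as selective disclosure survives in basic cryptographic building blocks. As a purely illustrative sketch – not anything Hughes himself wrote – a hash commitment lets someone publish proof that they hold a piece of information while revealing it only when, and if, they choose:

```python
import hashlib
import secrets

def commit(message: str) -> tuple[str, str]:
    """Commit to a message without revealing it.

    Returns (commitment, nonce). The commitment can be published;
    the nonce and message stay private until the holder chooses to reveal.
    """
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + message).encode()).hexdigest()
    return digest, nonce

def reveal(commitment: str, nonce: str, message: str) -> bool:
    """Verify a later, voluntary reveal against the earlier commitment."""
    return hashlib.sha256((nonce + message).encode()).hexdigest() == commitment

# The holder controls the moment of disclosure -- "selectively reveal oneself."
c, n = commit("the memo exists")
assert reveal(c, n, "the memo exists")        # honest reveal verifies
assert not reveal(c, n, "a different claim")  # altered message fails
```

The design point is the asymmetry: the world can verify what is revealed, but only the individual decides what and when to reveal – privacy as power, not mere secrecy.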
In the cypherpunk universe, the NSA embodies everything they stand against: it is closed, proprietary, and runs computers so expensive that only a government could build them. For a technolibertarian it is a perfect dystopia: not only big government, but anti-privacy government. Theirs is a simplified, utopian vision of the world – it is how hackers in prison can compare themselves to Jews persecuted by Nazis, or treat the internet as if it doesn’t require a physical reality of routers, fiber pipelines, computers, microchips, and monitors to exist.
When you think the way you code, in a binary of good and bad, encountering the real world can be an unpleasant experience. And now the community is grappling with the messy complexity that the tools they build, like all tools, are being used for good and evil and many things in between. “Plenty of people in the hacker community, including myself, probably have a lot less respect for people working to assist the NSA at this point,” Marlinspike says.
In part the problem comes from what the government needs in terms of secrecy. Coders believe in a world that embraces the option of privacy – as Eric Hughes explained it, “selectively reveal[ing] oneself” – but eschews secrecy (to cypherpunks, “something one doesn’t want anybody to know”). While the Intelligence Community is obsessed with secrets and limiting access to information, the InfoSec community – those cypherpunk kids – is obsessed with transparency. It is a tension that makes their relationship fraught at best, dangerous at worst.
There is, at its heart, a contradiction in the cypherpunk ideology: the world cannot simultaneously protect everyone’s privacy and also be totally transparent of agenda and motivation. But the circle is partly squared when you realize that ‘privacy’ is the realm of the individual while ‘secrecy’ is the domain of large institutions – governments and large corporations. Unsurprisingly for a libertarian ethos, it is at heart a debate about power. But in reaching out to these technolibertarian coders for help, the government is in effect asking them to betray their own ethos.
In the 1990s, Hughes thought cryptography, and the privacy it affords, would usher in a new age of progressive change. But somewhere along the way, progressive change became disruptive change. Some of that came from the work of Julian Assange. Like Snowden, Julian Assange has led the charge against the government’s efforts to both monitor Americans and be cloaked in secrecy – and use hackers to do their dirty work for them. Assange coauthored a book in 2010, Cypherpunks: Freedom and the Future of the Internet. In it he argued that privacy technology, like strong cryptography, is how the internet as a whole can both protect individuals from the state and, ultimately, disrupt the state itself. The internet, which Assange calls “our greatest tool of emancipation,” has been “transformed into the most dangerous facilitator of totalitarianism we have ever seen.” Issuing a “call to cryptographic arms,” Assange then sets out cryptography as the only possible bulwark against tyranny.
These days the new paranoid style of cypherpunks is spreading through the Infosec community. Though support for Assange himself has ebbed and flowed, his hyper-individualistic, anti-authority, utopian concept of government and liberty is displacing the more traditional utopian ideals of the early hacker movement. And that intellectual foundation, according to Gabriella Coleman, the Wolfe Chair in Scientific and Technological Literacy at McGill University, is “a culture committed to freeing information, insisting on privacy, and fighting censorship.”
For a time, the rising star of this new culture was Aaron Swartz, a computer researcher who helped invent the RSS format at the age of 14, then helped create Creative Commons (an open copyright scheme better suited to the Internet than traditional copyright) and later Reddit. He was a hacker in the most classical sense of the term: technologically savvy, incredibly smart and utterly devoted to making systems work better through technology.
The hackers who helped build the World Wide Web, like Tim Berners-Lee, also saw him as the future of the internet. As Noam Scheiber put it, “What these adults saw in Swartz was someone who could realize the messianic potential of the Internet, someone who could build the tools that would liberate information and keep it free from the corporations and bureaucrats who would wall it off.” He was open-source, pro-privacy, and opposed to corporate or government ownership of cyberspace.
While on a fellowship at MIT, Swartz had access to the entire JSTOR academic catalogue, the digital library of virtually all academic articles in the humanities, normally accessed for steep fees paid by universities. He didn’t believe it should be closed – didn’t believe that intellectual property should be controlled in that way – so he downloaded millions of documents, acting on the principles of his “Guerilla Open Access Manifesto,” which urged making such work available to anyone on the internet. The feds promptly arrested him. The subsequent investigation, indictment, and relentless (some thought vindictive) prosecution led to Swartz’s suicide by hanging in January of 2013. In the months since, his death has become a rallying cry in the community.
Shortly after Swartz’s suicide, Wikileaks revealed he had been a “source” for the organization and in communication with Julian Assange. In fact, if it weren’t already clear: Assange is at the center of many of the most important incidents of geek culture butting up against laws and government interests in recent years.
“Julian Assange is an important cultural catalyst,” Gabriella Coleman told me in July. Assange helped the hacker community realize its potential for political activism, prevailing upon its natural suspicion of government and its cynicism about whether the government actually has the public’s best interest at heart. Coleman explained that prior to the rise of Wikileaks many hackers thought they could leverage their security and openness ideals to influence some public opinion, but their vision was never very large. Assange showed that the right kind of hack, the right kind of leak, could spark massive transformative change. Assange instantly became a perfect anti-hero to the patriotic-hacker types in government – brash, uncontrollable, imperfect, and especially attractive to young people who might be recruited for government work.
It was through conversations with Assange, according to government prosecutors at her court martial, that Chelsea – formerly Bradley – Manning began her journey to become the biggest national security leaker in the country’s history. The two began chatting as early as November of 2009, and over the next six months their discussions ranged from the politics of the war to how Manning could cover her tracks.
Manning was already a malcontent – she had thrown chairs at fellow soldiers – but it seems clear Assange played a role in focusing that inchoate discontent and anger into direct action through leaking. Then, just days after Edward Snowden revealed himself as the source of the leaks exposing NSA spying activities around the globe, Assange again popped up, saying he “had been in indirect communication” with Snowden. Always, it seems, Assange is there encouraging dissatisfied young people to buck authority, leak sensitive files, and, ultimately, upend governments globally.
Tech has a troubled, complicated political history
It’s peculiar, in many ways, that hackers ever came around to help Washington. For decades, Silicon Valley and the hacker culture behind it have been dogged by the charge that their idealism, a sort of techno-utopianism, is all just barely disguised, generalized hostility to government. In hackerdom the web was a virtual wild west where governments couldn’t control behavior and fierce individualism ruled. Cypherpunks took libertarian individualism to an extreme, limiting their communities to themselves and their connections to the outside world to their computers.
Paulina Borsook, a staff writer for Wired, saw that isolation as deeply troubling. Describing a culture of “tremendous self-insulation,” Borsook worried as far back as 1996 about how hackers were going to affect the country. “What will result if the people who want to shape public policy,” she wrote of the seemingly limitless push to bring hackers into government, “know nothing about history or political science or, most importantly, how to interact with other humans?”
Yet that libertarianism has its roots in the 1960s counterculture movement, which found a natural home in the budding high-technology scene of the San Francisco area. As the counterculturalists reached for new forms of human consciousness, they increasingly used technology to enable it. A massive community grew up around the Whole Earth Catalog, a National Book Award-winning counterculture publication that combined (in their words) “the rugged individualism and back-to-the-land movements of the Sixties counterculture” and “the nascent global community made possible by the Internet.”
The Catalog was an early presence online, too, as the Whole Earth ‘Lectronic Link, or WELL. Along the way Stewart Brand, the founder and editor of the Whole Earth Catalog, received a $1.3 million advance in 1983 for the Whole Earth Software Catalog, which his publishers hoped would encourage a community around software similar to the one he had enabled in the 1960s.
Brand was no stranger to using computers to advance his social ideas – he wrote about hackers for Rolling Stone in 1972, where he described them as a “mobile new-found elite” building the computers – and computer games – of tomorrow. Yet the hackers Brand profiled mostly worked on the early incarnation of the internet called ARPANET, named after the Pentagon research branch that invented it. So he saw, early and up close, the reluctance many contractors felt at having their expertise put to use by the government.
“The resistance may have something to do with reluctances about equipping a future Big Brother and his Central Computer,” Brand wrote. “The fascination resides in the thorough rightness of computers as communications instruments, which implies some revolutions.”
Forty-one years ago, when the internet was little more than an exciting science experiment funded by the Pentagon and dramatically expanded upon in research universities, the fundamental tension between researchers’ utopian goals and fears of government exploitation defined much of the community.
The U.S. Army funded the development of the first recognizable modern computer, called ENIAC [Electronic Numerical Integrator And Computer], in 1946. ENIAC calculated artillery tables for a variety of weapons (Los Alamos first used it to calculate the yield of the first hydrogen bomb). But in designing a flexible machine – a thousand times faster than the most advanced mechanical computational device of the time – its creators had built something with much more promise than simple calculations: though laboriously slow by today’s standards, it could be programmed to do astonishingly complex tasks. The military-funded ENIAC in many ways birthed the modern computing era.
In 1962, the Advanced Research Projects Agency, now known as DARPA, wanted to connect the computers used at the Pentagon, Strategic Air Command in Omaha, and the North American Aerospace Defense Command (NORAD) in Colorado. A year later, ARPA embraced packet switching (a method for transmitting information over a network in small, independently routed chunks), which is still the foundation of the internet. In 1968, ARPA funded the building of the first network. By the mid-1970s, the TCP/IP protocol dramatically improved the reliability of ARPANet, allowing new networks to link up. The internet was born.
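The core idea of packet switching is simple enough to sketch in a few lines. This is an illustrative toy, not ARPA’s actual design: a message is chopped into numbered packets, which can cross the network independently and in any order, and the sequence numbers let the destination reassemble them:

```python
def packetize(message: str, size: int = 8) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, chunk) packets of at most
    `size` characters each."""
    return [(i // size, message[i:i + size])
            for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Packets may arrive in any order; sequence numbers restore it."""
    return "".join(chunk for _, chunk in sorted(packets))

pkts = packetize("packets travel independently across the network")
pkts.reverse()  # simulate out-of-order arrival
assert reassemble(pkts) == "packets travel independently across the network"
```

Because no single path or circuit is reserved for the whole message, the network can route around failures – the property that made the design attractive to a military worried about surviving an attack.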
What this means for the current tension between many computer security professionals and the U.S. government is profound. The internet was invented and funded by the military. At the same time, from its earliest days the internet was never exclusively militarized. “I think it’s an over-generalization,” Marlinspike, the security researcher, says, to suggest “that the security community has traditionally had a good relationship with the Intelligence Community as a whole. Many of the security community’s inception stories also begin with being at odds with entities like the NSA.”
Indeed, the two sides – researchers and the military – have oscillated between trust and mistrust, cooperation and confrontation, for decades – a fact made worse by the 21st century explosion of government contracting.
The Inherent Weakness of a Contracted Intelligence Community
The rapid rise of cyberwarfare offices in most military commands and intelligence offices – along with the creation of the military’s Cyber Command in 2009 – created enormous pressure to fill jobs with skilled IT and security workers. But despite years of effort to reverse the trend, the Intelligence Community is still heavily contracted out. This is truer for cyberwarfare programs than for the more traditional analytic offices, because the need for rapid expansion has outstripped the government’s ability to hire internally. That means the government, and the NSA in particular, has had to dip again and again into this pool of potential workers – the hacker community writ large – who are themselves conflicted, at best, about the goals of the work they have signed on for. Snowden made a hefty six-figure salary while living in Hawaii and working for Booz Allen Hamilton on contract at the NSA. He’s hardly alone.
Government hiring is too slow – and often pays far too little – to attract many competent security researchers. Yet technical skills are in incredibly high demand right now. And because recruiting and training takes so much time, contractors must fill the gap.
Snowden’s top secret clearance was processed by a contractor, USIS – which is now facing a grand jury investigation over whether it took shortcuts in its investigations.
Snowden is the highest-profile case, but all such new recruits present a challenge to government officials: finding out who among them either doesn’t grasp, or actively rejects, the institutional code of conduct around secrecy. Members of Anonymous, the hacking collective responsible for denial-of-service attacks against corporations, claim to have “infiltrated” the U.S. Army. Whether or not that is true, the risk posed by such infiltration is only going to increase.
If contractors really are subjected to rushed background investigations, yet are paid vastly more than their federal counterparts, it would create a perverse incentive to hire skilled, but not necessarily trustworthy, workers. The government certainly thinks that is what happened with Snowden. In a Congressional hearing in June, Patrick McFarland, inspector general for the U.S. Office of Personnel Management, said that he believes Snowden was improperly vetted by USIS.
People with top secret clearances are supposed to undergo a “Periodic Reinvestigation” every five years. Often it is as simple as re-filing a common application form, though it can sometimes require a polygraph.
According to a report in McClatchy, more than 73,000 people undergo polygraphs each year. It is a deeply controversial and invasive process. Normally, people who have already been cleared face substantially less scrutiny when their clearance is renewed; polygraphs, by contrast, are always painful, digging into shameful, deeply personal incidents to gauge a person’s vulnerability to coercion, temptation, or bribery. Nevertheless, the polygraph is a cornerstone of the counterintelligence process – much of the threat assessment for employees handling sensitive material takes place during those screenings.
Notably, everyone at the NSA and CIA undergoes polygraphs – including Edward Snowden. Investigators are unsure how the process could be reformed to catch the next potential leaker: the next person who finds himself so disgruntled with the highly delicate material he comes across that he takes it upon himself to employ the weapon of exposure rather than abide by the rules to which he agreed.
Funding cuts, caused by sequestration, are forcing the suspension of all kinds of renewal investigations for highly cleared contractors. It’s unclear how any oversight or accountability can take place if the periodic reinvestigations are suspended – especially when the biggest clearance contractor, USIS, is itself being investigated for shoddy investigations.
Outside of periodic reviews, there is a growing paranoia within intelligence agencies – precisely the effect Assange hoped for in his manifesto about leaking. The so-called Insider Threat Program is the most visible example. The ITP, as it’s called, requires millions of federal employees and contractors to watch for “high-risk persons or behaviors” among their peers; every employee faces a steep penalty, including possible criminal charges, for failing to report them. What that risk entails – a political ideology, mental instability, discontent – is worryingly vague. In the public description of the ITP, everyone and no one is a risk. The NSA in particular has instituted a “Two-Person Rule,” which requires that two people always be present when top secret information is accessed.
NSA Chief General Alexander has said the new rule “makes our job more difficult,” because it will slow down the analytic process. But speed is not the greatest downside to such paranoia.
While few see the Infosec community utterly rejecting the government, the government hasn’t begun to grapple with the cultural challenge posed by a generation that grew up deeply distrustful of its policies, surrounded by peers actively trying to subvert them – a generation whose unique skills the government nonetheless depends on. It is a challenge unlike any the government has faced since World War II.
The collaboration between hackers and the government continues, shakily. Hackers get the mainstream legitimacy of government work, while the government gets to tap their expertise. Done right, it is a symbiosis that can protect the country while funding remarkable innovation. But even done right, there is an inherent tension: will those workers believe in their work, or will they, like Snowden, become laws unto themselves, more faithful to the ethos of their rogue coding culture than to the government that employs them? Hackers might also be seen as a check on government wrongdoing. But with every hacker a law unto him or herself, the unpredictability and insecurity are a problem for the security systems of government and banks alike. Left unchecked, the cultural clashes between hackers and the government could leave everyone vulnerable.