Why Your Secure Building Isn’t

Better security through physical penetration tests

My book, Red Team: How to Succeed by Thinking Like the Enemy, provides the first in-depth investigation into the work of red teams in the military, intelligence, homeland security, and private sectors, revealing their best practices, most common pitfalls, and most effective applications. Below is an adaptation.

In the course of conducting interviews for my book, Red Team, I unintentionally broke into an allegedly highly secure government building. After initially failing to obtain a meeting with a senior official in a government security position, I requested that a mutual acquaintance pass along a short e-mail, from a Gmail account, describing my research project and questions that I hoped to ask. Weeks later, an administrative assistant reached out to me and let me know that this senior official had agreed to meet me in person. The administrative assistant and I spoke over the phone to arrange a time the following week, mid-morning at the senior official’s office. The assistant then sent me a confirmation e-mail with the location, different transportation options to get there, and a reminder to bring my government-issued ID.

The office building was a highly secure facility, set back more than a block from traffic and ringed with blast walls, a series of controlled-access points, armed guards, surveillance cameras, and metal detectors. Once past the access points, visitors are required to show their IDs, have a scheduled meeting that appears in a shared internal database, get their photograph taken, receive a visitor’s photo badge that is supposed to be displayed at all times, and, finally, have an employee escort them through the hallways.

After arriving five minutes late, I was waiting in a long line to pass through a metal detector when a security guard answered a phone call and then shouted a close approximation of my name. I stepped out of line to answer, and before I could say anything, she said, “Oh you can go ahead, they are waiting for you upstairs.” I walked to the front of the line, thinking that I still needed to be screened, but she simply waved her arm and declared, “No, no, you can just go around and head on in.” Next, I approached a front desk, behind which several armed guards stood, to show my passport, get my picture taken, and receive my badge. Before I got to the desk, a young man — likely an intern — asked, “Are you Zenko?” After I nodded affirmatively, he replied, “Okay, let’s go.” Not only was I never asked to show my ID, checked against the internal database, or given a badge, but, before the young man and I walked away, a guard behind the desk handed me a slip of paper that mysteriously read: “SCREENED.” I placed it in my pocket. We then took the next available elevator to the senior official’s office.

After a two-minute wait, he and I were sitting together in a conference room. Nobody had verified who I was or even screened me for weapons or explosives.

Ironically, I could have been conducting a red team vulnerability probe, or “pen test,” short for penetration test, of the facility. It would have taken little effort to comb through publicly available databases to determine who was the best candidate to serve as a trusted intermediary between the senior official and myself. The technical skills required to obtain unauthorized access to the mutual acquaintance’s Gmail account, and then use it to pass along an e-mail that purportedly came from me, are easily obtainable. By conducting simple reconnaissance of the facility, I might have recognized the vulnerabilities in how visitors were processed when they arrived late for meetings with people pressed for time. Furthermore, I could have coerced or bribed the intern to vouch for me, or even placed a trusted accomplice in the internship program in advance. I may have been able to obtain access to the shared internal database and create a “scheduled” meeting with the official. Finally, I could have determined that visitors received a “SCREENED” piece of paper and made duplicates of it, in case someone stopped me while unescorted. In this instance, I broke in, but only by accident. And while I found it troubling how relatively easy this was, it should have been in no way surprising. Moreover, in the private sector, the level of security for most buildings is far worse.

When you walk into most modern office buildings — whether a corporate high-rise, hospital, or casino — you expect and recognize the familiar trappings of security. These include surveillance cameras, doors requiring an employee-access card, a magnetometer screening device, friendly greeters sitting behind the front desk to process visitors, and security guards observing the environment. When you see these ubiquitous symbols of security, you might be reassured that the building is adequately protected from criminals, terrorists, or disgruntled employees. You would be deeply mistaken: the outward appearance of security rarely correlates with the actual protection of a building, or of the people and contents within.

Most companies spend the minimum amount possible securing their facilities because this funding comes directly out of profit margins. The level of security rarely rises above the minimum insurance or government-regulated standards, as interpreted through industry-approved best practices, which, while worth adopting, are wholly insufficient for dealing with motivated and adaptive adversaries. The security personnel hired and trained to protect a facility are fixtures of the environment; they simply do not think like an enemy or conceive of all the ways an adversary could cause damage or break in.

The tactics and techniques that any motivated person could use to gain unauthorized access to a facility are freely available online. These include entering through the loading dock or entrance to an employee smoking area; through a locked door with help from an insider or via lock-picking techniques; by “tailgating” an employee with a proximity badge or swipe card (which themselves can be easily hacked) who is kind enough to hold the door open for the next person who happens to be struggling with several packages; or even by impersonating a “contractor” who arrives to fix the air conditioner, which hackers might have shut down remotely in advance so that an uncomfortably warm security guard is eagerly expecting someone.

As the former Army Delta Force commando and pen tester who goes by the pseudonym Dalton Fury describes it:

“Anyone can assess a threat whose face is smeared with camouflage paint and is running with an automatic weapon in his hand. It becomes much tougher if that same threat is an attractive female, wearing a body bomb underneath a fleece, with a pistol in her purse.”

Fury also relays stories of obtaining unauthorized access to secure buildings by dressing up as Santa Claus around the holidays, posing as a pizza delivery guy from a national chain, and even staging a horrific hunting accident outside the front gate to lure a security guard away from his post. Once you become conscious of typical building vulnerabilities, and of how routinely such tried-and-true tactics succeed, you begin to see poor security everywhere. In the case of my unanticipated “break-in” at the government building, the elaborate construction of visual intimidation actually felt more like lax security, once I became aware of what lax security looks like.

Protecting buildings or facilities should be easier than protecting computer networks because they are tangible, and people experience and interact with them directly. A building’s management and security team is presumably expected to conceive of and implement sufficient security policies. Indeed, most security professionals falsely believe that their organization has a sufficient and integrated defensive strategy to address security threats, including insider leaks, stolen assets or data, and physical threats to staff. A 2014 survey of federal employees found that 76 percent thought their organization had adequately prepared them for security threats. These perceptions most likely do not reflect reality.

It is the job of Jayson E. Street to show how these assumptions of security are false. Street is admittedly loud and impulsive, and his bio on LinkedIn lists him as “one of Time’s persons of the year for 2006.” The actual person for that year was “You,” so, hilariously, Street naturally assumed it meant him. He proudly proclaims himself a hacker, which includes his side job of conducting pen tests, or what he calls “social engineering engagements,” to rigorously test and improve building security for his clients. His presentations at security conferences are legendary for their humor, passion, and cheesy production values, and they effectively convey Street’s ultimate objective: educating pen testers and security professionals about their shared responsibility for uncovering and patching vulnerabilities. “We made red teaming so rock star that everybody wants to be a red team ninja, but I’m here to help the blue team. So now I tell everyone I’m purple team.”

To drive home his point, Street makes his pen tests unsophisticated. “I limit myself to two hours on Google to gather intelligence on the client. The point is to show that anyone could do this to your institution.” Rather than develop an Ocean’s Eleven-like meticulous plan, he walks into a building through the door used by a company’s smokers like he belongs there — “just walking in through the door being my charming self.” He does so under the guise of several roles, including the “outsider” who shows up for an interview or appointment only to wander the halls to conduct surveillance, the “authority figure” demanding access to inspect something while pretending to take notes on a computer tablet, or the “technician” who enters the premises under the pretext of fixing something — for which he wears a shirt that reads: “Your company’s COMPUTER GUY.”

Like most pen testers, he has a 100-percent success rate in getting into secure facilities that clients hire him to assess.

But rather than relying upon exhaustive research or well-honed skills, Street highlights that “my best assets are bad impulse control, and a total lack of shame.” Unlike most other pen testers who employ more subtle methods, Street continues to escalate the engagement until he is eventually caught.

Like all pen testers, he carries an engagement letter and a business card with the chief security officer’s phone number on it, which a security guard can call to verify. If he does not produce the “get out of jail free card” fast enough, he could be beaten up or tasered. He will often carry two engagement letters, one false and one real. Or, he might tell the security guard, “I am doing a security assessment,” which will often prompt the guard to wave him past. Or, “If you let me go past, I won’t write up all of your mistakes in my report.” Finally, if an employee questions his presence, he will tell them, “Congratulations, I am doing a security assessment and you caught me; here, have a Starbucks gift card” and simply continue on his way without security being notified. These encounters are clear violations of an institution’s security procedures. They all get recorded by the hidden cameras that Street wears and are written up in his final report.

In one case, Street improvised a pen test that he described as “the most evil thing I’d ever done.” While he was on an engagement in Kingston, Jamaica, a security-conscious multinational financial firm challenged him to get into its headquarters building, which it claimed was as secure as Fort Knox. By reviewing e-mail addresses and using a network-scanning tool, Street determined that the headquarters and its charity arm, located across the street, used the same computer network. Street contacted the firm’s charity posing as an American television producer making a documentary about corporations doing charitable work in the community. Explaining that he was flying back to the United States the next morning, he was able to secure a meeting with an executive at the charity without providing any verification.

During the meeting, Street volunteered to show the executive videos of his alleged documentaries, which required him to put a thumb drive in the executive’s computer. The thumb drive was a “rubber ducky”: a USB device that presents itself to the operating system as a keyboard, a class of “human interface device” that computers automatically detect and trust. This allowed Street to later obtain access to the headquarters building’s computer network. Street then prepared an after-action report documenting how he had compromised the security protocols, listing the steps its managers should take to prevent a similar attack (essentially, segmenting the charity network from the corporate network), and detailing what the costs to the financial firm would have been.

What I accomplished incidentally, and what Street does professionally, demonstrate that your “secure” building is not nearly as secure as you might imagine. Recognizing this should compel the building’s management and security leaders to prioritize what it is they are most concerned about protecting and who poses the most likely threat, and to adjust their defenses accordingly.