Wake up, make breakfast, get the kids to school, drive to work, break into the chief financial officer’s inbox and steal the entire company’s employee tax records. Maybe later you’ll grab a bagel from across the street.

For “red teams” — or offensive security researchers — it’s just another day at work.

These offensive security teams are made up of skilled hackers who are authorized to find vulnerabilities in a company's systems, networks and employees. By being hacked from within, a company can better understand where it needs to shore up its defenses to prevent a real attacker in the future. But social engineering, where hackers manipulate their targets, can have serious consequences for those on the receiving end. Although red team engagements are authorized and legal, the ethics of certain attacks and tactics can go unconsidered.

Newly released research looks at the ethics involved in offensive security engagements. Is it ethically acceptable to send phishing emails, bribe a receptionist or plant compromising documents on a person’s computer if it means preventing a breach down the line?

The findings showed that security professionals, like red teamers and incident responders, were more likely to find it ethically acceptable to run certain kinds of hacking activities against other people than to have those same activities run against themselves.

The research — a survey of more than 500 people working in both security and non-security positions, presented for the first time at ShmooCon 2020 in Washington, DC this week — found that non-security professionals, such as employees working in legal, human resources or at the reception desk, are nine times more likely to object to receiving a phishing email as part of a red team engagement than security professionals, such as red teamers or incident responders.

It is hoped the findings will help start a discussion about the effects of a red team engagement on a company's morale during an internal penetration test, and help companies understand the limits of a red team's rules of engagement.

“When red teamers are forced to confront the fact that their targets are just like themselves, their attitude about what it’s OK to do to another person changes dramatically,” said Tarah Wheeler, a cybersecurity policy fellow at New America and co-author of the research.

The survey asked about a range of potential tactics in offensive security testing, such as phishing, bribery, threats and impersonation. The respondents were randomly assigned one of two surveys containing all the same questions, except one asked if it was acceptable to conduct the activity and the other asked if it was acceptable if it happened to them.

The findings showed security professionals were as much as four times more likely to object when certain tactics, such as phishing emails and planting compromising documents, were used against them.

“Humans are bad at being objective,” said Wheeler.

The findings come at a time when red teams are increasingly making headlines for their activities as part of engagements. Just this week, two offensive security researchers at Coalfire had charges against them dropped for breaking into an Iowa courthouse as part of a red team engagement. The researchers were tasked and authorized by Iowa’s judicial branch to find vulnerabilities in its buildings and computer networks in an effort to improve its security. But the local sheriff caught the pair and objected to their activities, even though they presented a “get out of jail free” letter detailing the authorized engagement. The case gave a rare glimpse into the world of security penetration testing and red teaming, even as the arrests were universally panned by the security community.

The survey also found that security professionals in different parts of the world were more averse to certain activities than others. Security professionals in Central and South America, for example, object more to planting compromising documents whereas those in the Middle East and Africa object more to bribes and threats.

The authors of the research said the takeaway is not that red teams should avoid certain offensive security practices, but that they should be aware of the impact those practices can have on their targets, who often include their corporate colleagues.

“When you’re setting up a red team and scoping your targets, consider the impact on your co-workers and clients,” said Roy Iversen, director of security engineering and operations at Fortalice Solutions, who also co-authored the research. Iversen said the findings may also help companies decide if they want an outside red team to carry out an engagement to minimize any internal conflict between a company’s internal red team and the wider staff.

The researchers plan to expand their work over the next year to improve their overall survey count and to better understand the demographics of their respondents to help refine the findings.

“It’s an ongoing project,” said Wheeler.