OWASP AppSec California is one of my favorite security conferences: the talks are great, attendees are friendly, and it takes place right next to the beach in Santa Monica. Not too shabby 😎

One problem I always have, though, is that there are some great talks on the schedule that I end up missing.

So this year I decided to go back and watch all 44 talks from last year’s con, AppSec Cali 2019, and write a detailed summary of their key points.

If I had realized how much time and effort this was going to be at the beginning I probably wouldn’t have done it, but by the time I realized that this endeavor would take hundreds of hours, I was already too deep into it to quit 😅

Attending AppSec Cali 2020

If you’re attending AppSec Cali this year come say hi! I’m giving a talk and would be happy to chat about all things security.

What’s in this Post

This post is structured as follows:

Stats: Some high-level stats and trends. Which talk categories were most popular? Which companies gave the most talks?

Overview of Talks: A quick rundown of every talk in a few lines each, so you can quickly skim them and find the talks that are most directly relevant to you.

Summaries: detailed summaries of each talk, grouped by category.

Note the navigation bar on the left hand side, which will enable you to quickly jump to any talk.

Feedback Welcomed!

If you’re one of the speakers and I’ve left out something important, please let me know! I’m happy to update this. Also, feel free to let me know about any spelling or grammar errors or broken links.



If you find DevSecOps / scaling really interesting, I’d love to chat about what you do at your company / any tips and tricks you’ve found useful. Hit me up on Twitter, LinkedIn, or email.

Stats

In total, AppSec Cali 2019 had 44 talks that were a combined ~31.5 hours of video.

Here are the talks grouped by the category that I believed was most fitting:

Not too much of a surprise here: you’d expect defense (blue team) and web security talks to be emphasized at an OWASP conference.

We can also see that containers and Kubernetes were fairly popular topics (3).

Some things I found surprising were how many talks there were on threat modeling (4) and account security (4), and how there were only 3 primarily cloud security-focused talks. Perhaps the biggest surprise was that there were 3 talks on securing third-party code, with Slack discussing the steps they took to evaluate Slack bots and Salesforce discussing the review process on their AppExchange.

Here we see Netflix crushing it: they had presence on a panel, gave one of the keynotes, and collectively had 3 other talks. And of these 5 talks, 3 made my top 10 list. Not too shabby 👍

In second place, we see Segment coming in strong!

Netflix, Segment, and Dropbox were on at least one panel, while the rest of the companies listed had separate talks.









Overview of Talks

For your ease of navigation, this section groups all of the talks by category, gives a high-level description of what they’re about, and provides a link to jump right to their summary.

Note: the talks in each category are listed in alphabetical order, not in my order of preference.

My Top 10 Talks

This section lists my top 10 favorite talks from AppSec Cali 2019 ❤️

It was incredibly difficult narrowing it down to just 10, as there were so many good talks. All of these talks were selected because they are information-dense with detailed, actionable insights. I guarantee you’ll learn something useful from them.

A​ Pragmatic Approach for Internal Security Partnerships

Scott Behrens, Senior AppSec Engineer, Netflix

Esha Kanekar, Senior Security Technical Program Manager, Netflix

How the Netflix AppSec team scales their security efforts via secure defaults, tooling, automation, and long term relationships with engineering teams.

A Seat at the Table

Adam Shostack, President, Shostack & Associates

By having a “seat at the table” during the early phases of software development, the security team can more effectively influence its design. Adam describes how security can earn its seat at the table by using the right tools, adapting to what’s needed by the current project, and the soft skills that will increase your likelihood of success.

Cyber Insurance: A Primer for Infosec

Nicole Becher, Director of Information Security & Risk Management, S&P Global Platts

A lovely jaunt through the history of the insurance industry, the insurance industry today (terminology you need to know, types of players), where cyber insurance is today and where it’s headed, example cyber insurance policies and what you need to look out for.

(in)Secure Development - Why some product teams are great and others aren’t…

Koen Hendrix, InfoSec Dev Manager, Riot Games

Koen describes analyzing the security maturity of Riot product teams, measuring that maturity’s impact quantitatively using bug bounty data, and discusses a lightweight prompt that can be added to the sprint planning process to prime developers to think about security.

Lessons Learned from the DevSecOps Trenches



Clint Gibler, Research Director, NCC Group

Dev Akhawe, Director of Security Engineering, Dropbox

Doug DePerry, Director of Product Security, Datadog

Divya Dwarakanath, Security Engineering Manager, Snap

John Heasman, Deputy CISO, DocuSign

Astha Singhal, AppSec Engineering Manager, Netflix



Learn how Netflix, Dropbox, Datadog, Snap, and DocuSign think about security. A masterclass in DevSecOps and modern AppSec best practices.





Netflix’s Layered Approach to Reducing Risk of Credential Compromise

Will Bengston, Senior Security Engineer, Netflix

Travis McPeak, Senior Security Engineer, Netflix

An overview of efforts Netflix has undertaken to scale their cloud security, including segmenting their environment, removing static keys, auto-least privilege of AWS permissions, extensive tooling for dev UX (e.g. using AWS credentials), anomaly detection, preventing AWS creds from being used off-instance, and some future plans.

Starting Strength for AppSec: What Mark Rippetoe can Teach You About Building AppSec Muscles

Fredrick “Flee” Lee, Head Of Information Security, Square

Excellent, practical and actionable guidance on building an AppSec program, from the fundamentals (code reviews, secure code training, threat modeling), to prioritizing your efforts, the appropriate use of automation, and common pitfalls to avoid.

The Call is Coming From Inside the House: Lessons in Securing Internal Apps

Hongyi Hu, Product Security Lead, Dropbox

A masterclass in the thought process behind and technical details of building scalable defenses; in this case, a proxy to protect heterogenous internal web applications.

Startup Security: Starting a Security Program at a Startup

Evan Johnson, Senior Security Engineer, Cloudflare

What it’s like being the first security hire at a startup, how to be successful (relationships, security culture, compromise and continuous improvement), what should inform your priorities, where to focus to make an immediate impact, and time sinks to avoid.

Working with Developers for Fun and Progress

Leif Dreizler, Senior AppSec Engineer, Segment

Resources that have influenced Segment’s security program (talks, books, and quotes), and practical, real-world tested advice on how to: build a security team and program, do effective security training, successfully implement a security vendor, and the value of temporarily embedding a security engineer in a dev team.

Account Security

Automated Account Takeover: The Rise of Single Request Attacks

Kevin Gosschalk, Founder and CEO, Arkose Labs

Defines “single request attacks,” describes the challenges of preventing account takeovers, gives examples of the types of systems bots attack in the wild and how, and offers recommendations for defending against them.

Browser fingerprints for a more secure web

Julien Sobrier, Lead Security Product Owner, Salesforce

Ping Yan, Research Scientist, Salesforce

How Salesforce uses browser fingerprinting to protect users from having their accounts compromised. Their goal is to detect sessions being stolen, including by malware running on the same device as the victim (which thus shares the victim’s IP address).

Contact Center Authentication

Kelley Robinson, Dev Advocate, Account Security, Twilio

Kelley describes her experiences calling in to 30 different companies’ call centers: what info they requested to authenticate her, what they did well, what they did poorly, and recommendations for designing more secure call center authentication protocols.

Leveraging Users’ Engagement to Improve Account Security

Amine Kamel, Head of Security, Pinterest

Pinterest describes how it protects users who have had their credentials leaked in third-party breaches using a combination of programmatic and user-driven actions.

Blue Team

CISO Panel: Baking Security Into the SDLC

Richard Greenberg, Global Board of Directors, OWASP

Coleen Coolidge, Head of Security, Segment

Martin Mazor, Senior VP and CISO, Entertainment Partners

Bruce Phillips, SVP & CISO, Williston Financial

Shyama Rose, Chief Information Security Officer, Avant

Five CISOs share their perspectives on baking security into the SDLC, DevSecOps, security testing (DAST/SAST/bug bounty/pen testing), security training and more.

It depends…

Kristen Pascale, Principal Techn. Program Manager, Dell EMC

Tania Ward, Consultant Program Manager, Dell

What a PSIRT team is, Dell’s PSIRT team’s workflow, common challenges, and how PSIRT teams can work earlier in the SDLC with development teams to develop more secure applications.

On the Frontlines: Securing a Major Cryptocurrency Exchange

Neil Smithline, Security Architect, Circle

Neil provides an overview of cryptocurrencies and cryptocurrency exchanges, the attacks exchanges face at the application layer, on wallets, user accounts, and on the currencies themselves, as well as the defenses they’ve put in place to mitigate them.

The Art of Vulnerability Management

Alexandra Nassar, Senior Technical Program Manager, Medallia

Harshil Parikh, Director of Security, Medallia

How to create a positive vulnerability management culture and process that works for engineers and the security team.

Cloud Security

Cloud Forensics: Putting The Bits Back Together

Brandon Sherman, Cloud Security Tech Lead, Twilio

An experiment in AWS forensics (e.g. Does the EBS volume type or instance type matter when recovering data?), advice on chain of custody and cloud security best practices.

Detecting Credential Compromise in AWS

Will Bengston, Senior Security Engineer, Netflix

How to detect when your AWS instance credentials have been compromised and are used outside of your environment, and how to prevent them from being stolen in the first place.

Containers / Kubernetes

Authorization in the Micro Services World with Kubernetes, ISTIO and Open Policy Agent

Sitaraman Lakshminarayanan, Senior Security Architect, Pure Storage

The history of authz implementation approaches, the value of externalizing authz from code, authz in Kubernetes, and the power of using Open Policy Agent (OPA) for authz with Kubernetes and ISTIO.

Can Kubernetes Keep a Secret?

Omer Levi Hevroni, DevSecOps Engineer, Soluto

Omer describes his quest to find a secrets management solution that supports GitOps workflows, is Kubernetes native, and has strong security properties, which led to the development of a new tool, Kamus.

How to Lose a Container in 10 Minutes

Sarah Young, Azure Security Architect, Microsoft

Container and Kubernetes best practices, insecure defaults to watch out for, and what happens when you do everything wrong and make your container or cluster publicly available on the Internet.

Keynotes

Fail, Learn, Fix

Bryan Payne, Director of Engineering, Product & Application Security, Netflix

A discussion of the history and evolution of the electrical, computer, and security industries, and how the way forward for security is a) sharing knowledge and failures and b) creating standard security patterns that devs can easily apply, raising the security bar at many companies, rather than improvements helping just one company.

How to Slay a Dragon

Adrienne Porter Felt, Chrome Engineer & Manager, Google

Solving hard security problems in the real world usually requires making tough tradeoffs. Adrienne gives 3 steps to tackle these hard problems and gives examples from her work on the Chrome security team, including site isolation, Chrome security indicators (HTTP/s padlock icons), and displaying URLs.

The Unabridged History of Application Security

Jim Manico, Founder, Manicode Security

Jim gives a fun and engaging history of computer security, including the history of security testing, OWASP projects, and XSS, important dates in AppSec, and the future of AppSec.

Misc

How to Start a Cyber War: Lessons from Brussels-EU Cyber Warfare Exercises

Christina Kubecka, CEO, HypaSec

Lessons learned from running EU diplomats through several realistic cyber warfare-type scenarios, and a fascinating discussion of the interactions between technology, computer security, economics, and geopolitics.

Securing Third-Party Code

Behind the Scenes: Securing In-House Execution of Unsafe Third-Party Executables

Mukul Khullar, Staff Security Engineer, LinkedIn

Best practices for securely running unsafe third-party executables: understand and profile the application, harden your application (input validation, examine magic bytes), secure the processing pipeline (sandboxing, secure network design).

Securing Third Party Applications at Scale

Ryan Flood, Manager of ProdSec, Salesforce

Prashanth Kannan, Product Security Engineer, Salesforce

The process, methodology, and tools Salesforce uses to secure third-party apps on its AppExchange.

Slack App Security: Securing your Workspaces from a Bot Uprising

Kelly Ann, Security Engineer, Slack

Nikki Brandt, Staff Security Engineer, Slack

An overview of the fundamental challenges in securing Slack apps and the App Directory, the steps Slack is taking now, and what Slack is planning to do in the future.

Security Tooling

BoMs Away - Why Everyone Should Have a BoM

Steve Springett, Senior Security Architect, ServiceNow

Steve describes the various use cases of a software bill-of-materials (BOM), including facilitating accurate vulnerability and other supply-chain risk analysis, and gives a demo of OWASP Dependency-Track, an open source supply chain component analysis platform.

Endpoint Finder: A static analysis tool to find web endpoints

Olivier Arteau, Desjardins

A new tool to extract endpoints defined in JavaScript by analyzing its Abstract Syntax Tree.

Pose a Threat: How Perceptual Analysis Helps Bug Hunters

Rob Ragan, Partner, Bishop Fox

Oscar Salazar, Managing Security Associate, Bishop Fox

How to get faster, more complete external attack surface coverage by automatically clustering exposed web apps by visual similarity.

The White Hat’s Advantage: Open-source OWASP tools to aid in penetration testing coverage

Vincent Hopson, Field Applications Engineer, CodeDx

How two OWASP tools can make penetration testers more effective and demos using them. Attack Surface Detector extracts web app routes using static analysis and Code Pulse instruments Java or .NET apps to show your testing coverage.

Usable Security Tooling - Creating Accessible Security Testing with ZAP

David Scrobonia, Security Engineer, Segment

An overview and demo of ZAP’s new heads-up display (HUD), an intuitive and awesome way to view OWASP ZAP info and use ZAP functionality from within your browser on the page you’re testing.

Threat Modeling

Game On! Adding Privacy to Threat Modeling

Adam Shostack, President, Shostack & Associates

Mark Vinkovits, Manager, AppSec, LogMeIn

Adam Shostack and Mark Vinkovits describe the Elevation of Privilege card game, built to make learning and doing threat modelling fun, and how it’s been extended to include privacy.

Offensive Threat Models Against the Supply Chain

Tony UcedaVelez, CEO, VerSprite

The economic and geopolitical impacts of supply chain attacks, a walkthrough of supply chain threat modeling from a manufacturer’s perspective, and tips and best practices in threat modeling your supply chain.

Threat Model Every Story: Practical Continuous Threat Modeling Work for Your Team

Izar Tarandach, Lead Product Security Architect, Autodesk

Attributes required by threat modelling approaches in order to succeed in Agile dev environments, how to build an organization that continuously threat models new stories, how to educate devs and raise security awareness, and PyTM, a tool that lets you express TMs via Python code and output data flow diagrams, sequence diagrams, and reports.

Web Security

An Attacker’s View of Serverless and GraphQL Apps

Abhay Bhargav, CTO, we45

An overview of functions-as-a-service (FaaS) and GraphQL, relevant security considerations and attacks, and a number of demos.

Building Cloud-Native Security for Apps and APIs with NGINX

Stepan Ilyin, Co-founder, Wallarm

How NGINX modules and other tools can be combined to give you a live dashboard of malicious traffic, automatic alerts, blocking of attacks and likely bots, and more.

Cache Me If You Can: Messing with Web Caching

Louis Dion-Marcil, Information Security Analyst, Mandiant

Three web cache related attacks are discussed in detail: cache deception, edge side includes, and cache poisoning.

Inducing Amnesia in Browsers: the Clear Site Data Header

Caleb Queern, Cyber Security Services Director, KPMG

Websites can use the new Clear-Site-Data HTTP header to control the data their users’ browsers store for their site.
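For a concrete sense of what the header looks like: each directive is a double-quoted token in the header value. Below is a minimal sketch; the helper function is a hypothetical illustration, not from the talk.

```python
# Minimal sketch of building a Clear-Site-Data header value, e.g. to send
# on a logout response. The directive names ("cache", "cookies", "storage")
# come from the Clear Site Data spec; this helper is a hypothetical example.

def clear_site_data_header(*directives):
    """Build the Clear-Site-Data header value.

    Each directive must appear as a double-quoted token, comma-separated.
    """
    return ", ".join(f'"{d}"' for d in directives)

# On logout, ask the browser to wipe everything it stored for this origin:
logout_headers = {
    "Clear-Site-Data": clear_site_data_header("cache", "cookies", "storage"),
}
print(logout_headers["Clear-Site-Data"])  # "cache", "cookies", "storage"
```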

Node.js and NPM Ecosystem: What are the Security Stakes?

Vladimir de Turckheim, Software Engineer, Sqreen

JavaScript vulnerability examples (SQLi, ReDoS, object injection), ecosystem attacks (e.g. ESLint backdoored), best practice recommendations.

Preventing Mobile App and API Abuse

Skip Hovsmith, Principal Engineer, CriticalBlue

An overview of the mobile and API security cat and mouse game (securely storing secrets, TLS, cert pinning, bypassing protections via decompiling apps and hooking key functionality, OAuth2, etc.), described through an example back and forth between a package delivery service company and an attacker-run website trying to exploit it.

Phew, that was a lot. Let’s get into it!









My Top 10 Talks

A​ Pragmatic Approach for Internal Security Partnerships

Scott Behrens, Senior AppSec Engineer, Netflix

Esha Kanekar, Senior Security Technical Program Manager, Netflix

abstract slides video

How the Netflix AppSec team scales their security efforts via secure defaults, tooling, automation, and long term relationships with engineering teams.

Check Out This Talk

This is one of the best talks I’ve seen in the past few years on building a scalable, effective AppSec program that systematically raises a company’s security bar and reduces risk over time. Highly, highly recommended 💯

The Early Days of Security at Netflix

In the beginning, the Netflix AppSec team would pick a random application, find a bunch of bugs in it, write up a report, then kick it over the wall to the relevant engineering manager and ask them to fix it. Their reports had no strategic recommendation section, just a list of boilerplate recommendations/impact/description of vulns. Essentially, they were operating like an internal security consulting shop.

This approach was not effective - vulns often wouldn’t get fixed and the way they operated caused relationships with dev teams to be adversarial and transactional. They were failing to build long term relationships with product and application teams. They were focusing on fixing individual bugs rather than strategic security improvements.

Further, dev teams would receive “high priority” requests from different security teams within Netflix, which is frustrating, as it was unclear to them how to relatively prioritize the security asks and the amount of work was intractable.

Enabling Security via Internal Strategic Partnership

Now, the AppSec team aims to build strong, long term trust-based relationships with dev teams.

They work closely with app and product teams to assess their security posture and identify and document investment areas: strategic initiatives that may take a few quarters, not just give dev teams a list of vulns.

Security Paved Road

The Netflix AppSec team invests heavily in building a security paved road: a set of libraries, tools, and self-service applications that enable developers to be both more productive and more secure.

For example, the standard authentication library is not only hardened, but it also is easy to debug and use, has great logging and gives devs (and security) insight into who’s using their app, and in general enables better support of customers who are having issues, whether it’s security related or not.

As part of our Security Paved Road framework, we focus primarily on providing security assurances and less on vulns.

Tooling and Automation

The Netflix AppSec team uses tooling and automation in vuln identification and management as well as application inventory and risk classification.

This information gives them valuable data and context. For example, when they’re meeting with a dev team, they’ll understand the app’s risk context, purpose, etc. and be able to make better recommendations.

Intuition and organizational context are still used though. They’re data informed, not solely data driven.

Netflix’s 5-Step Approach to Partnerships

1. Engagement identification

Identify areas of investment based on factors like enterprise risk, business criticality, sensitivity of data being handled, bug bounty submission volume, overall engineering impact on the Netflix ecosystem, etc.

Ensure that the application team is willing to partner.

2. Discovery meeting(s)

Basically a kick off and deep dive meeting with the app team.

What are some of their security concerns? What keeps them up at night?

Set context with stakeholders: we’re identifying a shared goal, not forcing them to a pre-chosen security bar.

Make sure the security team’s understanding of the app, built via automation, aligns with what the dev team thinks.

3. Security review

Based on info collected via automation and meeting with the dev team, the security team now knows what security services should be performed on which app and part of their landscape. You don’t need to pen test or do a deep threat model of every app. This process can be more holistic, sometimes over a set of apps rather than a specific one.

Work with other security teams to collect their security asks for the dev team. This full list of security asks is then prioritized by the security team and consolidated into a single security initiatives document.

4. Alignment on the Security Initiatives Doc

Discuss the security initiatives document with the dev team to ensure there is alignment on the asks and their associated priorities.

5. On-going relationship management and sync-ups

After aligning on the security asks, the dev teams make the initiatives part of their roadmap.

The sync-ups are key to maintaining the long term relationship with partnering teams. Meetings may be bi-weekly or monthly, and the security point of contact may join their all-hands, quarterly meeting plans, etc.

These meetings are not just to track that the security initiatives are on the app team’s roadmap, but also to ask if they have any blockers, questions, or concerns the security team can help with.

What’s going on in their world? What new things are coming up?

Automation Overview

Before going through a case study of this partnership process, let’s quickly cover some of the tooling and automation that enables Netflix’s AppSec team to scale their efforts.

Application Risk Rating: Penguin Shortbread

Penguin Shortbread performs an automated risk calculation for various entities including:

It finds all apps that have running instances and are Internet accessible, grouped by org.

It shows which apps are using security controls, like SSO, app-to-app mTLS, and secrets storage, and whether they’ve filled out the security questionnaire (Zoltar).

This allows the security team to walk into a meeting and celebrate the team’s wins: “Hey, it looks like all of your high risk apps are using SSO, that’s great. You could buy down your risk even further by implementing secret storage in these other apps.”

Application Risk Calculator

This gives a ballpark understanding of which apps the security team should probably look at first, based on properties like: is it Internet facing? Does it have an old OS or AMI? Which parts of the security paved road is it using? How many running instances are there? Is it running in a compliance-related AWS account like PCI?
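A back-of-the-envelope version of such a calculator might look like the sketch below. All field names and weights here are invented for illustration; Netflix’s actual scoring model isn’t public.

```python
# Hypothetical sketch of an application risk calculator in the spirit of
# the one described in the talk. Fields and weights are made up.

def risk_score(app):
    score = 0
    if app.get("internet_facing"):
        score += 40                       # exposed attack surface
    if app.get("os_age_days", 0) > 365:
        score += 15                       # stale OS / AMI
    if app.get("in_compliance_account"):  # e.g. a PCI-scoped AWS account
        score += 20
    score += min(app.get("running_instances", 0), 10)  # cap footprint weight
    # Paved road adoption buys risk down:
    score -= 10 * len(app.get("paved_road_controls", []))
    return max(score, 0)

apps = [
    {"name": "billing", "internet_facing": True, "in_compliance_account": True,
     "running_instances": 4, "paved_road_controls": ["sso", "mtls"]},
    {"name": "wiki", "running_instances": 2, "paved_road_controls": ["sso"]},
]

# Triage the highest-risk apps first:
for app in sorted(apps, key=risk_score, reverse=True):
    print(app["name"], risk_score(app))
```

The point isn’t the specific numbers; it’s that a rough, automated ranking is enough to decide where to spend scarce review time.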

Vulnerability Scanning: Scumblr

Scumblr was originally discussed by Netflix at AppSec USA 2016 (video) and was open sourced. It has since been changed heavily for internal use, and in general is used to run small, lightweight security checks against code bases or simple queries against running instances.

Security Guidance Questionnaire: Zoltar

Zoltar keeps track of the intended purpose for different apps and other aspects that are hard to capture automatically. As devs fill out the questionnaire, they’re given more tailored advice for the language and frameworks they’re using, enabling the dev team to focus on things that measurably buy down risk.

Case Study: Kicking Off a Partnership with the Content Engineering Team

Discovery Meeting Preparation: Understand Their World

The team they’re meeting with may have 100-150 apps. The security team uses automation to come to the meeting as informed as possible:

Risk scoring - Based on the data the apps handle, if they’re Internet facing, business criticality, etc.

Vulnerability history - From internal tools, bug bounty history, pen tests, etc.

Security guidance questionnaire - Questionnaire that the dev team fills out.

Discovery Meeting

The main point of the meeting is to build trust and show that the security team has done their homework. Instead of coming in and asking for a bunch of info, the security team is able to leverage their tooling and automation to come in knowing a lot about the dev team’s context: their apps, the risk factors they face, and the likely high risk apps they should focus on. This builds a lot of rapport.

If app teams aren’t receptive, that’s OK, the security team will circle back later.

Security Review

Perform holistic threat modeling/security services

This is different from a normal security review, as 100-200 apps may be in scope. They threat model a team’s ecosystem, not just a single app. Because the security team is informed about the team’s apps and risks, they can narrow the security help they provide and what they recommend in a customized way.

What controls can they use to buy down their biggest risks?

By investing in secure defaults and self-service tooling, the security team can focus on things that add more value by digging into the harder problems, rather than walking in and saying, “You need to use X library, you need to scan with Y tool,” etc.

Collect and prioritize context/asks from other Security Teams

This ensures the dev team doesn’t need to worry about asks from the 6-12 separate security subteams: the various security teams align on which security asks they want the dev team to prioritize.

Document the security asks into a security initiatives doc

Put all of the asks and related useful meta info into a standalone doc for easier tracking.

Align on the Security Initiatives Doc

According to Esha, this step is the secret weapon that helps make the whole process so effective. The doc includes:

Executive Summary: Focuses on the good as well as the bad. Threats, paved road adoption, open vulns, and an overview on the strategic work they should kick off first to mitigate the most risk.

Partnership Details: Provides context on when the meetings will be as well as the dev point of contact.

Team Details: Summary of the team, what they do, who to reach out to, documentation/architecture diagrams, and a list of known applications/libraries.

Security Initiatives Matrix: A prioritized list of security work (from all security teams). They set a high level objective for each goal and define its impact to Netflix. They’ll provide the specific steps to reach that goal and include the priority, description, owner, status, timeline for delivery, and an option for tracking the work in Jira.



During the meeting, security emphasizes the goal: it’s not just about fixing bugs, it’s about setting a long term strategic path to reduce risk.

Example security initiatives document

On-going Syncs

During these syncs, the security team’s goals are to build trust and show value, as well as talk about upcoming work and projects the dev team has on their plate.

The security team member ensures work is getting prioritized quarter over quarter and helps the dev team get connected with the right team if they’re hitting roadblocks when working on the security asks.

Scaling Partnerships - Security Brain

Security Brain is the customer-facing version of all of the security tooling. It presents to dev teams, in a single view, the risk automatically assigned to each app, the vulns currently open, and the most impactful security controls/best practices that should be implemented.

While other security tools present a broader variety of information and more detail, Security Brain is purposefully focused on just the biggest “need to know” items that dev teams should care about right now.

Risks With Our Security Approach

Netflix’s approach isn’t perfect:

There’s a heavy emphasis on the paved road, but not all apps use it, so they have limited visibility there.

The current automated risk scoring metrics are a bit arbitrary and could be improved.

They don’t yet have an easy way to push out notifications if there’s a new control they want existing partner teams to adopt.

Evolving AppSec at Netflix: In Progress

Asset Inventory

Asset Inventory provides a way to navigate and query relationships between disparate infrastructure data sources.

This includes not just code artifacts, but AWS accounts, IAM, load balancer info, and anything else related to an app, as well as ownership and team information.

The Netflix AppSec team is working on creating an authoritative application inventory, which will enable them to better measure security paved road adoption, improve their self-service model and validation of controls, and find where they don’t have visibility.

Prism

Prism builds on top of the asset inventory as a paved road measurement and validation risk scoring system. It will give them the ability to recommend, validate, and measure paved road adoption practices prioritized by application risks and needs.

Prism will enable the security team to quickly ask questions like, “OK, this team has some apps that access PII. Do they have any apps without logging? Show me all of their apps written in Java.”
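In spirit, those questions are just filters over a joined inventory. Here’s a toy illustration of that kind of query; the schema and data are invented for this example, not Prism’s (Prism is internal to Netflix).

```python
# Toy illustration of the kind of inventory query Prism enables.
# The inventory schema and data below are invented.

inventory = [
    {"app": "ratings",  "team": "content-eng", "language": "Java",
     "handles_pii": True,  "has_logging": True},
    {"app": "uploader", "team": "content-eng", "language": "Python",
     "handles_pii": True,  "has_logging": False},
    {"app": "search",   "team": "content-eng", "language": "Java",
     "handles_pii": False, "has_logging": True},
]

team = [a for a in inventory if a["team"] == "content-eng"]

# "Do they have any PII apps without logging?"
pii_no_logging = [a["app"] for a in team if a["handles_pii"] and not a["has_logging"]]

# "Show me all of their apps written in Java."
java_apps = [a["app"] for a in team if a["language"] == "Java"]

print(pii_no_logging)  # ['uploader']
print(java_apps)       # ['ratings', 'search']
```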

Benefits of Prism include:

Faster IR response and triage time (can determine who owns an app quicker)

More mature risk calculation and scoring

Assist in scaling partnerships by increasing self-service

We’ve discovered that focusing on the controls that buy down risk with automation is a lot easier than finding bugs with automation. That’s where we’re putting our emphasis right now.

Future Plans

Over time the Netflix AppSec team wants to grow investment in Secure by Default (Paved Road) efforts, as they tend to be high leverage, high impact, and excellent for devs - devs get a lot of value for free.

Not all security controls can be automated, so making self-service security easier to use is also valuable.

Security partnerships will always be valuable, as there are aspects and context that secure defaults and self-service tooling will never be able to handle. As more of the security team’s job is handled by widespread baseline security control adoption and self-service tooling, they’ll be able to provide even more value in their partnerships.

Stats: This Approach Works

The Netflix AppSec team reviewed all of the critical risk vulns they had over the past 3 years, and this is what they found:

20% could have lowered their risk to High if paved road authentication controls had been adopted.

12% could have been prevented or detected with third-party vulnerability scanning (see Aladdin Almubayed’s BlackHat 2019 talk).

32% would not have been found with automation or self-service, so security partnerships are an important tool to reduce risk.

They also found they used only 33% of their projected bug bounty spend, which they had set aside based on industry standards, so it appears that they are headed in the right direction.

Maturity Framework for Partnerships

Note that this is the maturity level of the security team’s partnership with the dev team, not the security maturity of the dev team itself.

Quick Wins

Determine your application risk scoring model

How do you determine which apps are high vs low risk? Consider using factors like if they’re Internet facing, the sensitivity of the data they interact with, programming language used, and compliance requirements.
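As a starting point, a scoring model based on those factors could look something like this sketch. The weights, factor names, and tier thresholds are entirely made up; you'd tune them to your own environment.

```python
# A minimal, hypothetical risk scoring model using the factors mentioned
# above. All weights and thresholds are invented for illustration.

def risk_score(app):
    score = 0
    if app.get("internet_facing"):
        score += 3
    # More sensitive data means higher risk.
    score += {"public": 0, "internal": 1, "pii": 3, "regulated": 4}.get(
        app.get("data_sensitivity", "public"), 0)
    # Memory-unsafe languages get a small bump.
    if app.get("language") in {"c", "cpp"}:
        score += 1
    # In scope for compliance regimes like PCI or SOX.
    if app.get("compliance_scope"):
        score += 2
    return score

def risk_tier(app):
    score = risk_score(app)
    return "high" if score >= 6 else "medium" if score >= 3 else "low"
```

Even a crude model like this is useful because it makes prioritization consistent and explainable, rather than dependent on whoever happened to do the review.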

Identify teams/orgs to partner with

Consider policies and compliance requirements, business criticality, etc.

Create an application inventory

Automate as much of it as possible so that you’re not constantly maintaining it.

Then, leverage this info in kicking off partnership discussions, consolidate and prioritize the security asks for dev teams, and create an easy to read and track security initiatives doc.

During your ongoing syncs with teams, ask, “What can we do for you?” and “How can we help you?”

Key Takeaways

Use tooling and automation to make data informed (but not wholly driven) decisions. Leverage the understanding you have about the app team’s ecosystem, their world, historical challenges they’ve had, and your knowledge of the relevant business context.

Give dev teams a single security point of contact to make communicating with them and answering their questions easier and less frustrating for them. This in turn helps build a long term trust based relationship with the partnering teams.

A Seat at the Table

Adam Shostack, President, Shostack & Associates

abstract slides video

By having a “seat at the table” during the early phases of software development, the security team can more effectively influence its design. Adam describes how security can earn its seat at the table by using the right tools, adapting to what’s needed by the current project, and the soft skills that will increase your likelihood of success.

At a high level, there are two phases in software development.

First, there is dialogue about an idea, where things are fluid, not fixed. Here we’re building prototypes and doing experiments, exploring ideas and their consequences, asking questions like “What if…” and “How about…”

As things get hammered out we move to discussion, where we’ve decided on many of the details so things are more fixed than fluid, we’ve committed to an idea, and are working towards production code.

A common challenge is that the security team is only involved in the discussion phase, after many important decisions have already been made, like the technologies in use, how the system will be architected, and how everything will fit together. At this point, any suggestions (or demands) from the security team to make significant changes to the tech used or overall architecture will be met with resistance, as these decisions have already been made and will set the project back.

Security needs a seat at the table, so we can provide input during the design phase.

But seating is limited at the table. There are already a number of parties there, like the QA team, IT, users, engineering, etc. Everyone wants a seat. However, studies have shown that as team sizes grow larger it becomes more difficult to build consensus and thus make progress, so there’s motivation to keep the table small.

Today, security often doesn’t get a place at the table. If, regardless of what is proposed, all you ever say at planning meetings is “That would be insecure” or “We’ll run a vuln scan / SAST / fuzzing,” then developers will think, “OK great, I know what security is going to say, so we don’t need them here in this meeting.”

Just like friends don’t let friends do meth, friends don’t let friends send developers 1,000 page SAST scan reports.

What’s Needed For A Seat At The Table?

Tools that work in dialogue - Tools need to work when things are fluid, not fixed.

Consistency - The same problems or challenges should get the same solution recommendations. Too often developers get different advice from different security people, which is confusing and makes it hard for them to do their jobs effectively.

Soft skills! - At a small table, if someone doesn’t play well with others, they don’t get invited back.

Threat Modeling as a Design Toolkit

Structure Allows Consistency

Threat modeling can help us get a seat at the table, and having a consistent structure and approach can make threat modeling much more successful.

The threat model can be created in some design software or done informally on a whiteboard.

When discussing what can go wrong, frameworks like STRIDE and kill chains provide structure to the brainstorming, so we’ll be able to answer the question in a consistent, repeatable way and come to similar conclusions.
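For reference, STRIDE's six categories can be captured as a simple checklist that makes the brainstorming consistent and repeatable. The prompt wording below is my own illustrative phrasing, not an official list.

```python
# STRIDE as a brainstorming checklist -- one prompt per threat category.
STRIDE = {
    "Spoofing": "Can someone pretend to be another user or component?",
    "Tampering": "Can data be modified in transit or at rest?",
    "Repudiation": "Can someone deny an action because we lack evidence?",
    "Information disclosure": "Can data leak to someone who shouldn't see it?",
    "Denial of service": "Can the system be made unavailable?",
    "Elevation of privilege": "Can someone gain capabilities they shouldn't?",
}

def threat_prompts(component):
    """Generate the same set of questions for any component, so two
    different reviewers reach similar conclusions."""
    return [f"{component}: {category} - {q}" for category, q in STRIDE.items()]
```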

By discussing what we’re going to do about these threats, we can start planning the security controls, architecture decisions, etc. before a single line of code is written, rather than trying to use duct tape later.

Reflect on the threat modeling process afterwards. Did we get value out of this? What do we keep doing? What should we do differently next time? Like your development processes, how your company threat models will evolve over time to best fit its unique environment.

Threat Modeling Is A Big Tent

Like developing software, the process of threat modeling can vary significantly: it can be lightweight, agile, fast, and low effort, or big, complicated, and slow. Which one makes sense depends on the project you’re on. Similarly, there are also different tools and deliverables that can be involved.

Think of threat modeling as being composed of building blocks, like how a software service can be composed of many microservices. If you think that threat modeling can only be done one way, like it’s a big monolith that cannot be decomposed into smaller parts, then you’ll lose the value of being able to take advantage of the building blocks that fit with your team’s needs.

Soft Skills

Soft skills are crucial in security.

While security people might like making jokes like this, this damages rapport with developers and doesn’t help us get our jobs done.

You might feel like soft skills are “unnatural.” That’s OK, everything we do starts that way! When you first started programming, did writing code feel natural? Probably not. Soft skills, like anything, are skills we need to learn and practice, by doing them.

Here are a few critical soft skills.

Respect

Pay attention to the person speaking. In meetings and informal discussions, don’t interrupt, read your email, or have side conversations. This conveys that you don’t value what the person is saying.

Pay attention to the people not speaking. Are we giving everyone the opportunity to speak? Oftentimes there are people who are very vocal and loud and can drown out others. Everyone has something to add; let their voices be heard.

Active Listening

Pay attention, and show that you’re listening with your body language and gestures. Let people finish what they’re saying; don’t just hear the first 10 words and then interrupt, telling them how their idea won’t work. One effective structure, which will feel unnatural at first, is:

I hear you saying [reflect back what they told you]…

Assume Good Intent

No one is paid to make your life harder. Everyone is just trying to get their job done, whether it’s shipping new features, designing the product, marketing it, etc. Instead of thinking they’re dumb or uninformed, ask yourself:

What beliefs might they have that would lead them to act or feel this way?

Everyone’s behavior makes sense within the context of their beliefs.

Diversity

Adam believes diversity has intrinsic value, as it allows you to take advantage of all of the skills, aptitudes, knowledge, and backgrounds that can make your company successful.

However, he’s found that you tend to make better progress with executives by making the business case for diversity. Rather than promoting diversity for its intrinsic value, argue that it will help the business: reference studies showing that diverse teams are more effective, note that a more broadly representative team can better connect with your diverse user base, and point out that the behaviors and environments that support diversity (e.g. being welcoming and supportive) also make your team or company a more attractive place to work, making it easier to hire and retain top talent. Conversely, a culture that drives non-traditional candidates away is probably not an environment people want to be in, and it will likely cause challenges when you need to interface with other teams in the company.

Questions

What do you do if your company had a bad event and it caused people to keep coming to the security team for help and it’s overwhelmed your team?

This is a great opportunity to train developers how to threat model so they can start to stand on their own, looping in the security team for harder cases as needed.

How do we know if we did a “good job”?

There are basically two types of metrics, mechanical and qualitative. For mechanical, you can ask measurable questions like, “Do we have a diagram? Did we find threats against this service? Did we file user stories or acceptance tests for each of the things we found?”

On the qualitative side, during retrospectives, you can ask questions like, “Are we happy with the time we spent on threat modeling? Do we feel it paid off well?”

How do you make the business case for giving developers more secure coding training?

Without secure coding training, developers are more likely to introduce vulnerabilities into the software they write. Once this code has been tested, delivered, and is in production, potentially with other components that rely on it, it’s very expensive to go back and fix it.

By having developers write secure software in the first place, you can limit the amount of rework that has to be done, which improves the predictability of shipping new features. You’re reducing the likelihood that you’ll discover new problems days, weeks, or months later and have to interrupt what you’re currently working on to fix them, at which time the developer who introduced the issue may have forgotten most of the relevant context.

Metrics can also be really valuable here. Track the number and types of vulnerabilities you’re discovering in various code bases so you can show that after the training, your SAST / DAST tools or pen tests are finding fewer issues, which is allowing you to spend more time building new features and less time fixing issues. See Data-Driven Bug Bounty for more ideas on leveraging vulnerability data to drive AppSec programs.

Cyber Insurance: A Primer for Infosec

Nicole Becher, Director of Information Security & Risk Management, S&P Global Platts

abstract slides video

This talk is a really fun and info-dense whirlwind tour of cyber insurance. Frankly, there’s too much good content for me to cover here, so I’ll do my best at providing an overview of the content Nicole covers with a few of the key points.

Nicole gave this talk because the cyber insurance industry is growing rapidly and at some point, we in the infosec community are going to have to be involved, so she wants to describe the key terminology and context we need to be reasonably informed.

Insurance is a mechanism individuals or organizations use to limit their exposure to risk. Individuals band together to form groups that pay for losses. By forming groups, the risk is spread and no individual is fully exposed.

Nicole gives a quick history of the insurance industry, from Hammurabi, medieval guilds, Pascal’s tables (which led to actuarial tables, underwriting, and affordable insurance) to Ben Franklin.

The insurance industry has evolved over time, based on new technology and risks; for example, fire insurance after the great fire of London, automobile insurance once cars became widespread, and now cyber insurance.

Insurance Industry Today

There are 3 major market participants:

Brokers / Agents: Act as middlemen between the insurance buyer and the carrier. Must be licensed and regulated. They develop the sales infrastructure needed to sell insurance on behalf of the carrier.

Carriers: The company that holds the insurance policy; they collect premiums and are liable for a covered claim. They pool the risk of a large number of policyholders by paying out relatively few claims while collecting premiums from the majority of policyholders who don’t file claims over the same period.

Reinsurers: Insurance purchased by insurance carriers to mitigate the risk of sustaining a large loss. Carriers sell off portions of their portfolio to a reinsurer that aggregates the risk at a higher level. This spreading of risk enables an individual insurance company to take on clients whose coverage would be too great a burden for it to handle alone.

Reinsurance blew my mind at first, but it makes sense.

Nicole walks through several types of insurance companies, including standard lines, excess lines, captives, direct sellers, domestic/alien, Lloyd’s of London, mutual companies, and stock companies.

Cyber Insurance - Background

The Cyber Insurance market is still early: only 15% of US companies have it and only 1% world-wide. As of 2016, it’s a $2.5B - $3.5B market and it’s estimated to be a $12B - $20B market by 2020.

A key distinction is differentiating between first party and third party insurance, both of which can be held by a company, individual, or group of individuals.

First party covers the policy holder against damages or losses to themselves or their property. Examples:

Breach notification

Credit monitoring services

PR campaign services

Compensating the business for lost income

Paying a ransom to an extortionist who holds data hostage

Third party protects the policy holder against liability for damages or losses they caused to a person or property. Examples:

Covers the people and businesses “responsible” for the systems that allowed a data breach to occur

Lawsuits relating to a data breach

Privacy liability

Technology errors & omissions

Writing and shipping vulnerable code/IoT

Key Terms

Coverage is the amount of risk or liability covered by a specific insurance policy, paid out up to a limit. A typical insurance policy is a collection of a series of coverages, each of which have their own sub-limit.

Exclusions define the types of risk that will not be covered.

Important Note: coverages will typically specify whether it’s for first party or third party losses, and it’s critical to examine these terms.

Example Policies

Nicole then walks through a number of example policies composed of several coverage subcomponents, each having their own risk area and sub-limit. The examples are: incident response, cyber crime, system damage and business interruption, network security and privacy liability, media liability, technology errors and omissions, and court attendance costs.

Common Exclusions

Common exclusions that will not be covered by cyber insurance include: property damage or bodily injury due to security incidents, loss of IP, acts of war and terrorism (you’ve been hacked by a nation state), unlawful data collection (you collected data you shouldn’t have), failure to follow minimum security expectations which led to a breach, and a core Internet failure (e.g. in root DNS servers).

You need to negotiate exclusions. They are important and vary by carrier. The devil is in the details.

Nicole concludes with a number of challenges faced by underwriters, the people who evaluate risk and determine policy pricing, as well as some important legal tests of cyber insurance.

Can Cyber Insurance Help Align Incentives?

One point that Nicole made, that I thought was neat, was that hopefully cyber insurance will eventually align economic incentives for security teams to do the right thing, not just because the security manager doesn’t want to get fired or have their company in the news. There have been a number of similar historical cases, like when homes had to be built to a fire-resistant code to be covered under the fire insurance Ben Franklin set up. Ideally, cyber insurance will be able to map risk to specific controls, which security teams can then use to justify headcount and budget, measurably improving their company’s security.

You can learn more and read some public cyber insurance policies in the SERFF Filing Access system, an online electronic records system managed by the National Association of Insurance Commissioners (NAIC).

(in)Secure Development - Why some product teams are great and others aren’t…

Koen Hendrix, InfoSec Dev Manager, Riot Games

summary abstract slides video

Koen describes analyzing the security maturity of Riot product teams, measuring that maturity’s impact quantitatively using bug bounty data, and discusses a lightweight prompt that can be added to the sprint planning process to prime developers about security.

Security Maturity Levels

Based on observing how development teams discuss security and interact (or don’t) with the security team, Koen groups dev teams into 4 security maturity levels.

Teams at these maturity levels range from largely not thinking about security (Level 1), to having one or two security advocates (Level 2), to security being a consistent part of discussions but it’s not yet easy and natural (Level 3), to security consciousness being pervasive and ever-present (Level 4).

Measuring Impact of Security Maturity Level

To examine if a dev team’s level had a measurable impact on the security of the code bases they worked on, Koen analyzed Riot’s 2017 bug bounty data grouped by team maturity level. The differences were clear and significant.

Compared to teams at Level 1, teams at Levels 2-4 had:

A 20% / 35% / 45% reduced average bug cost

A 35% / 55% / 70% reduced average time to fix

The average issue severity found from internal testing was 30% / 35% / 42% lower

| | Level 1 - Absence | Level 2 - Reactive | Level 3 - Proactive Process | Level 4 - Proactive Mindset |
|---|---|---|---|---|
| Avg $ Per Bug | $1 | $0.80 | $0.65 | $0.55 |
| Avg Time to Fix High Risk | 1 | 0.65 | 0.45 | 0.3 |
| Avg Issue Severity | $1 | $0.70 | $0.65 | $0.58 |

Avg $ Per Bug and Avg Time to Fix High Risk are fixed to $1 / 1 unit of time for Level 1 teams; Levels 2-4 are expressed relative to Level 1.

Avg Issue Severity - if bugs found through internal security reviews had been discovered through bug bounty, how expensive would they have been?

Prioritizing Security Investment

Riot Games chose to focus on raising Level 1 and 2 teams to Level 3, as that yields the biggest security benefits vs effort required, makes teams’ security processes self-sustaining without constant security team involvement, and makes them more accepting of future security tools and processes provided by the security team.

They did this by shaping development team behaviour, rather than purely focusing on automation and technical competencies and capabilities.

How to uplevel dev teams?

During standard sprint planning, dev teams now ask the following prompt and spend 2-3 minutes discussing it, recording the outcomes as part of the story in Jira/Trello/etc.:

How can a malicious user intentionally abuse this functionality? How can we prevent that?

Though the dev team may not think of every possible abuse case, this approach is highly scalable, as it primes devs to think about security continuously during design and development without the security team needing to attend every meeting (which is not feasible).

Final Thoughts

The security level of a team influences how the security team should interact with them. If the majority of your teams are Level 1 and 2, rolling out optional tooling and processes isn’t going to help. First, you need to level up how much they care about security.

Work with Level 3 and 4 teams when building new tooling to get early feedback and iterate to smooth out friction points before rolling the tooling out to the rest of the org.

Read the full summary here.

Lessons Learned from the DevSecOps Trenches

Clint Gibler, Research Director, NCC Group



Dev Akhawe, Director of Security Engineering, Dropbox



Doug DePerry, Director of Product Security, Datadog



Divya Dwarakanath, Security Engineering Manager, Snap



John Heasman, Deputy CISO, DocuSign



Astha Singhal, AppSec Engineering Manager, Netflix



summary abstract video

Learn how Netflix, Dropbox, Datadog, Snap, and DocuSign think about security. A masterclass in DevSecOps and modern AppSec best practices.

Great “Start Here” Resource for Modern AppSec / DevSecOps

When people ask me, “What’s on your shortlist of resources to quickly get up to speed on how to think about security and how to run a modern security program?”, this is one of the handful I share. Check out the full summary for this one; I bet you’ll be glad you did.

Though the security teams may have different names at different companies (e.g. AppSec vs ProdSec), they tend to have the same core responsibilities: developer security training, threat modeling and architecture reviews, triaging bug bounty reports, internal pen testing, and building security-relevant services, infrastructure, and secure-by-default libraries.

Commonalities

Everyone built their own internal continuous code scanning platforms that essentially run company-specific greps that look for things like hard-coded secrets, known anti-patterns, and enforcing that secure wrapper libraries are being used (e.g. crypto, secrets management).
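A toy version of such a scanner might look like the sketch below. The specific rules are examples I made up; real platforms maintain large, company-specific rule sets and run them continuously across every repo.

```python
# A toy "company-specific greps" scanner: regex checks for hard-coded
# secrets and banned anti-patterns. The rules here are illustrative.
import re

RULES = [
    # AWS access key IDs start with "AKIA" followed by 16 uppercase chars.
    ("hardcoded-aws-key", re.compile(r"AKIA[0-9A-Z]{16}")),
    # Literal passwords assigned in source.
    ("hardcoded-password", re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I)),
    # Example anti-pattern rule: enforce that MD5 isn't used directly.
    ("banned-md5", re.compile(r"\bhashlib\.md5\b")),
]

def scan(source: str):
    """Return (rule_name, line_number) for each finding in a source blob."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

The real engineering work in these platforms is less the matching and more the plumbing: keeping rules per-company, running on every commit, deduplicating findings, and routing them to the right owners.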

SAST and DAST tools were generally not found to be useful due to having too many FPs, being too slow and not customizable, and failing to handle modern frameworks and tech (e.g. single page apps).

Everyone emphasized the importance of building secure-by-default wrapper libraries and frameworks for devs to use, as this can prevent classes of vulnerabilities and keep you from getting caught up in vuln whack-a-mole.

This can be hard if you have a very polyglot environment but it’s worth it.

Determine where to invest resources by a) reviewing the classes of bugs your company has had historically and b) have conversations with dev teams to understand their day-to-day challenges.

Building relationships with engineering teams is essential to knowing relevant upcoming features and services, being able to advise engineering decisions at the outset, and spreading awareness and gaining buy-in for secure wrappers.

When you’re building a tool or designing a new process you should be hyper aware of existing developer workflows so you don’t add friction or slow down engineering. Make sure what you’ve built is well-documented, has had the bugs ironed out, and is easy for devs to use and integrate.

If possible, include features that provide value to devs if they adopt what you’ve built (e.g. telemetry) and try to hitch your security efforts to the developer productivity wagon.

Invest in tooling that gives you visibility - how is code changing over time? What new features and services are in the pipeline? What’s happening to your apps in production?

Differences

Netflix has gotten value from an internal security questionnaire tool they’ve built, while Snap and Dropbox had their version rejected by dev teams. This was due to wanting to have in-person discussions and the lack of collaboration features, respectively.

While everyone agreed on the importance of having strong relationships with engineering teams, John argued that individual relationships alone are not sufficient: dev teams grow faster than security teams and people move between teams or leave the company. Instead, you need to focus on processes and tooling (e.g. wrapper libraries and continuous scanning) to truly scale security.

For most of the panel members, the security teams wrote secure wrappers and then tried to get devs to adopt them. The Dropbox AppSec team actually went in and made the code changes themselves. This had the benefit of showing them that what they thought was a great design and solid code actually had poor dev UX and high adoption friction.

Favorite Quotes

“What are all the ways throughout the SDLC where we can have a low friction way of getting visibility?”

-John Heasman

“Prioritize your biggest risks and automate yourself out of each and every one of them.”

-Divya Dwarakanath

“If you don’t have a solution to point devs to, then you finding bugs doesn’t really matter.”

-Astha Singhal

“You have to brutally prioritize. Work on the things that are most likely to bite you the worst, while keeping a list of the other things that you can gradually get to as you have time and as the security team grows.”

-Doug DePerry

“Hitch your security wagon to developer productivity.” -Astha Singhal

“First, invest in gaining visibility. Then start automating once you know exactly the situation you’re in and the data sets you’re dealing with.”

-Doug DePerry

“There’s no silver bullet, just lots and lots of lead bullets.” -Devdatta Akhawe

Don’t spend too much time trying to ensure you’re working on the perfect task to improve your company’s security. Choose something that makes sense and get started!

This panel was an awesome, dense braindump of smart people describing how security works at their companies. I highly recommend you read the full summary here. You can also check out the full transcript here.

Netflix’s Layered Approach to Reducing Risk of Credential Compromise

Will Bengston, Senior Security Engineer, Netflix

Travis McPeak, Senior Security Engineer, Netflix

abstract slides video

An overview of efforts Netflix has undertaken to scale their cloud security, including segmenting their environment, removing static keys, auto-least privilege of AWS permissions, extensive tooling for dev UX (e.g. using AWS credentials), anomaly detection, preventing AWS creds from being used off-instance, and some future plans.

Segment Environment Into Accounts

Why? If the account gets compromised, the damage is contained.

The Netflix security teams have built a nice Paved Road for developers, a suite of useful development tools and infrastructure. When you’re using the Paved Road, everything works nicely and you have lots of tools available to make you more efficient.

But there are some power users who need to go outside the Paved Road to accomplish what they need to do.

At Netflix, the security team generally can’t block developers - they need to avoid saying “no” when at all possible.

Useful for separation of duties. The security team will instead put these power users in their own AWS account so they can’t affect the rest of the ecosystem.

Useful for sensitive applications and data. Only a limited set of users can access these apps and data.

Reduce friction by investing in tooling to C.R.U.D. AWS accounts. If you want to do account-level segmentation, you need to invest in tooling, for example, making it easy to spin up, delete, and modify meta info for accounts. The Netflix cloud security team has invested heavily in these areas.

Remove Static Keys

Why? Static keys never expire and have led to many compromises, for example, when AWS keys in git repos are leaked to GitHub.

Instead, they want short-lived keys, delivered securely, that are rotated automatically.

Netflix does this by giving every application a role, and then the role is provided with short-lived credentials by the EC2 metadata service.
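As a sketch, retrieving those short-lived credentials from the metadata service might look like this. The link-local endpoint and JSON field names match AWS's documented IMDSv1 flow; note that the newer IMDSv2 additionally requires a session token header, which this simplified example omits.

```python
import json
from urllib.request import urlopen

# The instance metadata service lives at this link-local address on EC2.
# (IMDSv1 style; IMDSv2 additionally requires a session token header.)
IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def parse_credentials(payload: str) -> dict:
    """Extract the short-lived keys from the metadata service's JSON."""
    doc = json.loads(payload)
    return {
        "access_key": doc["AccessKeyId"],
        "secret_key": doc["SecretAccessKey"],
        "session_token": doc["Token"],
        "expires": doc["Expiration"],  # AWS rotates creds before this time
    }

def fetch_role_credentials(role_name: str) -> dict:
    """Fetch credentials for the instance's IAM role (only works on EC2)."""
    with urlopen(IMDS + role_name, timeout=2) as resp:
        return parse_credentials(resp.read().decode())
```

Because the credentials expire and rotate automatically, a leaked key is only useful for a short window, unlike a static key that lives forever.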

Permission Right Sizing

For many companies, it can be difficult to keep up with all of the services you’re running, and it’s easy for a service to get spun up and then forgotten, if its developer leaves the company or moves to a different team. This represents recurring risk to your company, as these apps may have been given sensitive AWS permissions.

Netflix reduces this risk via RepoKid (source code, Enigma 2018 talk video). New apps at Netflix are granted a base set of AWS permissions. RepoKid gathers data about app behavior and automatically removes AWS permissions, rolling back if failure is detected.

When you build a cool tool, you gotta get a cool logo

This causes apps to converge to least privilege without security team interaction, and unused apps converge to zero permissions! 🎆

RepoKid uses Access Advisor and CloudTrail as data sources. Access Advisor allows it to determine, for a given service, has it been used in a threshold amount of time? CloudTrail provides: what actions have been called, by when, and by whom?
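The core decision RepoKid makes can be sketched roughly like this: given Access Advisor-style "last used" data, flag services the app hasn't touched within a threshold window so their permissions can be removed. This is a simplified illustration of the selection logic only (the real tool also tracks state, schedules changes, and can roll back), and the function and parameter names are my own.

```python
from datetime import datetime, timedelta

def services_to_repo(last_used, threshold_days=90, now=None):
    """Services whose permissions can be removed ("repoed").

    last_used maps service name -> datetime of last observed use,
    or None if the service has never been used.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=threshold_days)
    return {
        service
        for service, used_at in last_used.items()
        if used_at is None or used_at < cutoff
    }
```

Run periodically, logic like this is what makes unused apps converge toward zero permissions over time.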

Paved Road for Credentials

They wanted to have a centralized place where they could have full visibility into Netflix’s use of AWS credentials, so they built a suite of tools where they could provision credentials by accounts, roles, and apps as needed. If they could ensure that everyone used these tools, they’d know, for every AWS credential, who requested them and how they’re being used.

Before they built this tooling, developers would SSH onto boxes and access creds there, or curl an endpoint and do a SAML flow, but there wasn’t one solidified process to access creds, which made it difficult to monitor.

So the Netflix cloud security team built a service, ConsoleMe, that can handle creating, modifying, and deleting AWS creds.

Users can request credentials via a web interface using SSO or through a CLI

Another advantage of this approach is that when ConsoleMe is creating creds, it automatically injects a policy that IP restricts the creds to the VPN the requester is connected to, so even if the creds accidentally get leaked, they won’t work.
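The policy injection idea might look roughly like the following sketch. `aws:SourceIp` and `NotIpAddress` are real IAM condition elements, but the exact policy ConsoleMe injects isn't public, so treat this as illustrative.

```python
# Illustrative sketch: build an IAM policy statement that denies all
# actions unless the request originates from the requester's VPN CIDR,
# so leaked credentials are useless from anywhere else.

def vpn_restricted_policy(vpn_cidr: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                # Deny any request NOT coming from the VPN's IP range.
                "Condition": {"NotIpAddress": {"aws:SourceIp": [vpn_cidr]}},
            }
        ],
    }
```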

Because the cloud security team worked hard to make using ConsoleMe seamless for devs, they no longer see any devs SSHing in to an EC2 instance and getting creds that are valid for 6 hours, devs instead use the creds they receive from ConsoleMe that are only valid for 1 hour, reducing potential exposure time.

Benefits:

ConsoleMe provides a central place to audit and log all access to creds.

Anomaly detection - If someone is trying to request creds to a service they don’t own, or something is behaving strangely, they can detect those anomalies and investigate.

Their biggest win has been locking credentials down to the Netflix environment, so if the creds get leaked in some way there’s no damage.

Delivery Lockdown

Netflix uses Spinnaker for continuous delivery. Several hardening improvements were made, including restricting users to only deploying a role if they own the application in question, as they might otherwise be able to escalate privileges by choosing a role with more permissions than their current set, as well as tagging application roles to specific owners.

Prevent Instance Credentials from Being Used Off-instance

Goal: If attacker tries to steal creds (e.g. through SSRF or XXE), the creds won’t work.

See Will’s other talk, Detecting Credential Compromise in AWS for details.

They block AWS creds from being used outside of Netflix’s environment, and attempts to do so are used as a valuable signal of a potential ongoing attack or a developer having trouble, who they can proactively reach out to and help.

The more signals we can get about things going wrong in our environment, the better we can react.

Improving Security and Developer UX

One thing Travis and Will mentioned a few times, which I think is really insightful, is that the logging and monitoring they've set up can both detect potential attacks and let them know when a developer may be struggling, either because they don't know how systems work or because they need permissions or access they don't currently have.

Oftentimes the security team plays the role of locking things down. Things become more secure, but also harder to use. This friction either slows down development or causes people to go around your barriers to get their jobs done.

What's so powerful about this idea is that the systems you build to secure your environment can also be used to detect when those systems are giving people trouble, so the security team can proactively reach out and help.

Imagine you were starting to use a new open source tool. You're having trouble getting it to work, and then the creator sends you a DM: "Hey, I see you're trying to do X. That won't work because of Y, but if you do Z you'll be able to accomplish what you're trying to do. Is that right, or is there something else I can help you with?" Holy cow, that would be awesome 😍

One thing I've heard again and again from security teams at a number of companies, for example in our panel Lessons Learned from the DevSecOps Trenches, is that to really get widespread adoption of security initiatives in your org, the tooling and workflow needs to not just be easy and frictionless, it ideally also needs to provide additional value / make people's lives better than what they were previously doing.

Keep this in mind next time your security team is embarking on a new initiative. After all, a technically brilliant tool or process isn't that useful if no one uses it.

Detect Anomalous Behavior in Your Environment

Netflix tracks baseline behavior for accounts: they know what apps and users are doing, and they know what’s normal. This lets you do neat things once you realize:

Some regions, resources, & services shouldn’t be used 🛑

Netflix only uses certain AWS regions, resources, and services; some they don’t use at all. Thus when activity occurs in an unused region, or an AWS service that isn’t used elsewhere generates some activity, it’s an immediate red flag that should be investigated.

Unused Services

A common attack pattern: once an attacker gets hold of some AWS credentials or shell access to an instance, they run an AWS enumeration script that determines the permissions they have by iteratively making a number of API calls. When unused services are called, the Netflix cloud security team is automatically alerted so they can investigate.

This approach has been used to stop bug bounty researchers quickly and effectively.

Anomalous Role Behavior

This is the same idea as for services, but at the application / role level. Applications tend to have relatively consistent behavior, which can be determined by watching CloudTrail.

The cloud security team watches for applications that start behaving very differently, as well as common attacker first steps once they gain access (e.g. s3:ListBuckets, iam:ListAccessKeys, sts:GetCallerIdentity, which is basically the AWS equivalent of whoami on Linux). These API calls are useful for attackers, but not something an application would ever need to do.
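A minimal sketch of this kind of alerting over CloudTrail-style events might look like the following (this is a hypothetical illustration of the idea, not Netflix’s tooling; the event shape is simplified and the watchlist is an example):

```python
# Hypothetical sketch: flag CloudTrail events that are common attacker
# first steps, or that touch AWS services outside an application's
# observed baseline of normal behavior.
RECON_CALLS = {
    ("s3.amazonaws.com", "ListBuckets"),
    ("iam.amazonaws.com", "ListAccessKeys"),
    ("sts.amazonaws.com", "GetCallerIdentity"),
}

def suspicious_events(events, baseline_services):
    """Return events worth alerting on, given an app's baseline services."""
    return [
        e for e in events
        if (e["eventSource"], e["eventName"]) in RECON_CALLS
        or e["eventSource"] not in baseline_services
    ]
```

An app whose baseline is, say, only DynamoDB would trip an alert the moment someone runs an enumeration script with its creds.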

Future

Travis and Will shared a few items on the Netflix cloud security team’s future road map.

One Role Per User

Traditionally Netflix has had one role that’s common to a class of users; that is, many apps that need roughly the same set of permissions are assigned the same AWS role.

However, there are likely at least slight differences between the permissions these apps need, which means some apps are over-provisioned. Further, grouping many apps under the same role makes it harder to investigate potential issues and do anomaly detection.

In the future, when every user/app has their own AWS role, they can guarantee least privilege as well as do fine-grained anomaly detection.

Remove Users from Accounts They Don’t Use

Will and Travis would like to automatically remove users from AWS accounts they don’t use. This reduces the risk of user workstation compromise by limiting the attacker’s ability to pivot to other, more interesting resources- an attacker who compromises a dev laptop only gains access to the services they actively use.

Offboarding is hard. Devs may stop working on a project, move between teams, or leave the company. Having an automated process that detects when someone hasn’t used a given account within a threshold amount of time and removes the access would significantly help keep things locked down over time.

Whole > Sum of the Parts

All of these components are useful in isolation, but when you layer them together, you get something quite hard to overcome as an attacker, as there are many missteps that can get them detected: they need to know about the various signals Netflix is collecting, which services are locked down, etc. The goal is to frustrate attackers and cause them to go for easier targets.

Starting Strength for AppSec: What Mark Rippetoe can Teach You About Building AppSec Muscles

Fredrick “Flee” Lee, Head Of Information Security, Square

abstract slides video

In this talk, Flee gives some excellent, practical and actionable guidance on building an AppSec program, from the fundamentals (code reviews, secure code training, threat modeling), to prioritizing your efforts, the appropriate use of automation, and common pitfalls to avoid.

All while using weight lifting as an analogy.

I never expected to be including a weight lifting book cover in my security talk summaries, but here we are

To be honest, I don’t normally like talks that are “security is like X,” but this talk was fun, engaging, and chock full of practical, useful advice. And now I have an 8 pack, thanks Flee! 💪

Key Takeaways

The core points from Flee’s talk:

Start small with your program

Start with things where you can start seeing some wins on day 10, don’t only invest in things where you’ll start getting value 2 years in. You can’t be Google and Netflix tomorrow.

Specificity + Frequent Practice == Success

In times of crisis, we fall back on what we’ve practiced the most. This is what’s going to make you successful.

Measure everything!

That’s how you convince yourself you’re making progress and get buy-in from management.

You are not Ronnie Coleman - don’t use his program (yet)

If you’re just starting out or have a small program, it might not make sense to adopt what the biggest/most complex teams are doing. Pick the right thing for your company at the right time.

Everyone can do this

Your company doesn’t need an AppSec team to have a decent AppSec program, these are all things devs can do themselves.

Overview

Good AppSec is a muscle, it has to be trained and continuously practiced or else it atrophies.

If you’re just starting out, Flee believes the 3 core fundamentals for AppSec are:

Code reviews (security deadlifts)

Secure code training (security squats)

Threat modeling (security bench press)

The Fundamentals

Code Review

All security critical code should receive a code review prior to deployment. For example, AuthN/AuthZ components, encryption usage, and user input/output.

The “Security-critical” caveat is there because most teams don’t have the resources to review everything. Focus on the most important parts.

Developer Training

All devs should receive language-specific secure development training at least annually.

Don’t show kernel devs the OWASP Top 10 - they’ll be bored as it’s not relevant to them.

Rather than using generic examples, it’s much more interesting and engaging to show devs previous vulns from your company’s code bases. This makes it feel real and immediate.

Emphasize how to do the right thing, don’t just point out what the vulnerability is.

You know you’re doing training right when attendees like it!

See some of Coleen’s thoughts in the CISO Panel and Leif’s talk Working with Developers for some great details on how Segment makes security training for their devs fun and engaging.

Threat Modeling

All new significant features should receive a design review, led by developers with the security team assisting. AppSec should be there as a coach/assistant. This is a good way to get developer buy-in for security.

Document the results and plan mitigations for identified risks.

Getting Started - Beginner Gains!

Many teams start off trying to solve every vulnerability class. This is too much and can be overwhelming. Instead, pick a handful of key vulns to focus on. This focus can enable you to make a big impact in a relatively short amount of time.

Pick issues that are important to your business first. Are there any vuln classes that continue to pop up, from pen testing, bug bounty, and/or internal testing? If you’re not sure which vulns to focus on initially, that’s OK, you can pick a generic standard list, like the OWASP Top 10.

You’ll find that your weakest/most neglected areas improve quickly. Start small and learn!

Focus on Critical Areas

What’s most important to your business? Which teams historically have the most problems? Help them level up.

Aim your reviews at areas that return the biggest bang for the buck (e.g. high risk apps).

Make progress in a safe, refined way, for example, by defining processes or using checklists. Following the same established routine every time yields the same quality of results.

Understand the languages and frameworks your company uses. If you don’t know the stack your devs are using, you won’t be able to give them very good advice, and it damages their trust in you if you make generic or wrong recommendations.

Get help from devs - they can give you feedback on if you’re pushing security forward in a safe and sane way. Devs know the code the best, know what it’s supposed to do, etc.

Adding automation helps you scale, but it’s not where you should start. Nail the fundamentals first: understand where you are and the problems you have and target them.

Static analysis can be a great tool, but it isn’t perfect and you shouldn’t start with it. Add it in after your company already has a good understanding of how to do code reviews. You should first have a good idea of the types of issues the tool should be looking for, otherwise you’ll end up wasting time.

Automation supplements humans. It doesn’t replace all manual effort: you still need a good code review program.

Don’t lose focus on risk reduction. This is ultimately what security is about.

Every Day is Leg Day

Some activities are so useful that they should occur in every project. For example, code reviews have a huge return on risk reduction vs effort. Flee’s never had an instance where a code review wasn’t useful. They’re not quick, but they’re valuable.

Make the activity mandatory, but reduce friction where possible to ensure the process is easy and palatable to dev teams. The security team doesn’t necessarily need to do it; these activities can be done by devs in many orgs. Have the security team engage devs: “Hey, here are some security things that you should be looking for.”

Measure and Record Your Progress

You can’t manage what you don’t measure

If you don’t measure your progress, you won’t know if you’re succeeding. What are the common vulns you’re seeing? By team / product? For example, if a team has been having trouble with SQL injection, give them focused training. Also, tune tools to target your most common vulns.

Record everything

If it’s easy to log, collect it. Track ALL defects found, and make them visible to everyone, not just the security team. Devs will change their behavior when that info is public. This doesn’t need to be done in a negative way, but rather as helping people keep themselves accountable. If devs don’t see this info collected in one place, they might not know themselves.

Adopt Standards and Expert Help

Leverage what you’ve learned to start enforcing coding guidelines. As you progress, you can become stricter on what counts as a code review “violation.”

Over time, try to move the AppSec team towards being “coaches” of reviews rather than “reviewers.” The AppSec team should be personal trainers for the developers performing the reviews. Security Champions can scale AppSec efforts really well.

Refining for Specific Goals

Once you’ve mastered some fundamentals, you can tweak your review process to target specific weaknesses:

Tune your code reviews/static analysis to find issues specific to your org (even non-security issues).

Reinforce good practices with automation, not just pointing out problems.

Build/use secure frameworks and services.

Pro-tip: A powerful way to get dev buy-in for static analysis is to show them how to find a non-security bad coding practice they care about. For example, foo() must never have a hard-coded string passed as its second argument.

If you find this idea interesting, feel free to check out my ShellCon 2019 talk about finding code patterns with lightweight static analysis.
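As a toy version of the foo() example above (foo() is the hypothetical function from the pro-tip, not a real API), this kind of lightweight check is only a few lines with Python’s ast module:

```python
import ast

def find_hardcoded_second_arg(source, func_name="foo"):
    """Flag calls to func_name whose second positional argument is a
    hard-coded string literal; returns the offending line numbers."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == func_name
                and len(node.args) >= 2
                and isinstance(node.args[1], ast.Constant)
                and isinstance(node.args[1].value, str)):
            findings.append(node.lineno)
    return findings
```

Tools like Semgrep let you express this sort of pattern declaratively instead of writing AST-walking code by hand.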

Pitfall: Taking on Too Much

Don’t expect to cover every project immediately, especially if you’re just starting with a small team.

Don’t get hung up on low-risk apps and vulns and not give time to other code bases or other classes of issues that are more impactful. Some vulns matter more than others.

Give your program time to grow. You’re not going to have FAANG-level security in a year, especially if you have a small team. Overpromising what you’ll be able to accomplish in a short amount of time to other people in your org can damage your reputation and their trust in you.

Pitfall: Bad Form

No developer buy-in

Devs are incentivized to ship features. You have to be mindful of their perspective and mindset. Ideally members of the security team have some dev background, as it’s incredibly useful to be able to speak engineering’s language and communicate how they think.

Generic secure development training

Flee finds most security training is bad - it’s generic and not customized to the types of technologies one’s company uses or the situations their devs typically face. This makes it much harder to get dev buy-in and interest, as the training is taking some of their time.

Using Untuned Tools

Flee has never found a security tool that you buy, run without tuning, and the output is useful. Tools always require customization.

Pitfall: Trying to Use (the wrong) Shortcuts

There are no silver bullets to create a strong AppSec program, it takes time and effort.

Skipping documentation/metrics

You can’t just find bugs and put them in Jira. You need to document what you found and your results along the way so you can look back later and hold yourself and others accountable.

Don’t over-rely on tools

Despite what’s shouted from the RSA and BlackHat vendor halls, tools won’t solve all your problems.

Avoid FUD when trying to influence devs

Using FUD to try to motivate devs undermines your credibility and makes them less likely to listen to you in the future. You need their buy-in: that’s your best ally in getting security to run and work well.

What about [insert your fav activity]?!?!

This talk discussed code reviews, secure code training, and threat modeling because they’re the fundamentals.

There are other things that are useful, but they don’t have to be there on day 1 (e.g. pen testing, (NG)WAFs, RASP, etc.). They have their uses, but aren’t critical to AppSec.

Word on the street* is Flee used to write that on every performance review he gave his team.

*I may or may not have just made that up.

Questions

How do you do targeted, tailored training in an org when you have many languages and frameworks?

This is hard. Partner with devs and security champions and have them help you create the training.

Rugged Software also has some useful ideas on integrating security into agile dev environments.

The Call is Coming From Inside the House: Lessons in Securing Internal Apps

Hongyi Hu, Product Security Lead, Dropbox

abstract slides video

A masterclass in the thought process behind and technical details of building scalable defenses; in this case, a proxy to protect heterogeneous internal web applications.

Why care about securing internal apps?

In short: they often have access to sensitive data, and it’s technically challenging.

Compared to production, customer-facing applications, internal apps often get neglected by security teams. However, these internal apps often expose sensitive data or functionality, like a debug panel for production systems or a business analytics app with sensitive company data.

Unfortunately we can’t just firewall off these internal apps, as they could get accidentally exposed by a network misconfiguration, they can be targeted by an external attacker via CSRF or SSRF, or there may already be an attacker in your environment (hacker or insider threat).

Internal app security is interesting and challenging due to the scale and heterogeneity.

Scale: Most companies have a handful of primary production apps but hundreds of internal apps.

Heterogeneity: The production apps likely use a well-defined set of core tech stacks for which the security team has built secure-by-default frameworks and other defenses. This is not scalable to dozens of other languages and frameworks. Further, internal apps may be built by people who don’t spend much of their time coding.

When embarking on this work, the Dropbox security team had the following goals:

The defenses should be scalable and agnostic to the backend’s tech stack.

Scalable development and adoption process - development teams’ adoption needs to be as frictionless and self-service as possible.

I want to emphasize how smart these goals are: any backend-agnostic defense would be a great win, and building a great technical solution isn’t enough, you need adoption. And if the security team needs to be involved when the defense is applied by every dev team or project, that’s going to eat up your time for the next… forever.

The Approach

Assuming we’re starting with a blank slate (no defenses), how do we scale up security basics to hundreds of web apps quickly?

tl;dr: Add network isolation and enforce that all internal apps must be accessed through a proxy, which becomes a centralized, scalable place to build defenses into.

This approach allows them to:

Add authentication that they log and review and enforce strong 2FA (SSO + U2F)

Access control - check permissions for all requests, using ideas like entry point regulation

Enforce that users are using modern, up-to-date browsers (not vulnerable to known exploits, strong security posture)

Add monitoring, logging, etc.

Other Applications: Later in the talk, Hongyi goes into detail on how they used this proxy approach to add CSRF protections (adding SameSite cookie flags) and XSS protections (Content-Security-Policy headers + nonces). He also makes the argument that using WAFs on internal networks can be effective, as internal networks shouldn’t have the level of malicious noise that your external websites will.
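To make the mechanism concrete, here’s a minimal sketch (an illustration, not Dropbox’s code) of the kind of centralized header rewriting such a proxy can do: force SameSite on backend cookies that lack it, and attach a per-request CSP nonce:

```python
import secrets

def harden_headers(headers):
    """Rewrite backend response headers at the proxy layer: add SameSite
    to cookies missing it, and attach a nonce'd Content-Security-Policy.
    headers is a list of (name, value) tuples from the backend response."""
    nonce = secrets.token_urlsafe(16)
    out = []
    for name, value in headers:
        if name.lower() == "set-cookie" and "samesite" not in value.lower():
            value += "; SameSite=Lax"
        out.append((name, value))
    out.append(("Content-Security-Policy",
                f"script-src 'nonce-{nonce}'; object-src 'none'"))
    # A real proxy would also inject this nonce into the <script> tags
    # of the HTML it serves, so only blessed scripts execute.
    return out, nonce
```

Every app behind the proxy gets these protections without its owner writing a line of code, which is exactly the scaling win of the approach.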

They have a number of ideas for other applications of the proxy that they haven’t yet done, including CORS allowlists, preventing clickjacking, invariant enforcement, sec-metadata, channel-bound cookies, canaries, and more.

Reflections on the Approach

Benefits:

Gives you a centralized, scalable place to build defenses into.

App owners don’t have to build these defenses themselves.

It’s easy to deploy patches and changes - you only have to deploy fixes to one place, and it’s a service the security team has control over.

Provides a number of benefits to other security partners: InfraSec, Detection and Response, etc.

Tradeoffs / Considerations:

A team needs to maintain the proxy and be oncall when issues arise. This should be a team with experience maintaining high uptime systems. Consider partnering with your InfraSec team or an infrastructure team.

Reliability (and, to a lesser extent, performance) is critical. Ideally, if the proxy goes down you fail closed (safe), but if there are critical systems behind the proxy, you’ll be blocking people from doing their jobs. E.g. if you have a monitoring system behind your proxy and the proxy goes down, you won’t be able to troubleshoot.



Lessons Learned

Make developer testing and deployment easy; otherwise, this makes more work for the security team as you need to be heavily involved in adoption. The less friction there is, the more likely developers will want to use it.

Similarly, reduce the mental burden on developers as much as possible, using blessed frameworks with defenses built in and autogenerating security policies.

Make deployment and rollback safe and fast. You’re going to spend a lot of time troubleshooting policy mistakes, so make it quick and easy to reverse those mistakes. Make testing and debugging trivial.

Prioritize for your highest risks when rolling out the proxy; there will always be too much to do, so you have to brutally prioritize. For example, CSP might be too much work in your organization and other mitigations may be enough- the migration work might not be worth the risk reduction provided.

Which apps, if protected, will reduce the most risk?

Expect challenges; find trusted partners to try out ideas. In every company, there are people and teams that are more excited about security and willing to be early adopters of new security tools, services, and processes, even if they are currently half-baked, buggy, and high friction to use.

Start with them and iterate based on their feedback. This reduces the cost of failure, as you won’t be burning bridges when your solution isn’t perfect (which may cost you political capital if you bring it to less willing engineering teams before it’s ready). These early adopter teams can give you quick feedback, which allows you to increase your iteration speed.

On iteratively creating policies

Much of the work the Dropbox team ended up doing was refining policies. They could’ve saved a lot of time by initially adding the ability to enforce a relaxed policy while testing a tighter policy.

NIST CyberSecurity Framework

For tying all of these ideas together into a coherent strategy, Hongyi recommends the NIST Cybersecurity Framework.

The NIST framework identifies 5 main security functions (column headings). Fill out this table and for each category, think about: Who are the teams you need to involve?

What is the tech you have to build?

What are the processes to create?

This process helps you figure out where your company’s gaps are and where to invest going forward. This becomes your roadmap, which is also a great way to communicate your strategy to senior management.

Hongyi has borrowed some ideas from Adam Shostack: treat these efforts like a security investment portfolio, considering where you’re investing now and where you want to change those investments. This is a simple tool but very flexible; you can adapt it as needed.

Final Thoughts

When you’re determining your AppSec team’s priorities, aim to scale your defenses and your processes.

Internal security engineering can be a great place to experiment with new features and to train up new security engineers. For example, if you’re not sure about CSP or SameSite cookies, you can deploy them internally first and learn from the experience. It’s also a good way to get experience building safe frameworks, as the reliability requirements are much lower than in externally facing production contexts.

Startup Security: Starting a Security Program at a Startup

Evan Johnson, Senior Security Engineer, Cloudflare

summary abstract slides video

In this talk, Evan describes what it’s like being the first security hire at a startup, how to be successful (relationships, security culture, compromise and continuous improvement), what should inform your priorities, where to focus to make an immediate impact, and time sinks to avoid.

This talk won’t help you build Google Project Zero in the next year, but it will give you a set of guiding principles you can reference.

If you’re lucky enough to be the first person working on security at a company then you have a great privilege and great responsibility. Successful or not, it’s up to you.

You will set the security tone of the company, and interactions with you will shape how your colleagues view the security team, potentially for years to come.

Evan’s Background

Evan has only worked at startups - first at LastPass, then as the first security engineer at Segment, then he was the first security engineer at Cloudflare, which he joined in 2015.

Security at a SaaS business

Evan breaks down “doing security” in a SaaS business as “protecting, monitoring, responding, and governing” the following categories: production, corporate, business operations, and the unexpected.

It’s also critical for the security team to communicate what they’re doing to their colleagues, other departments, the board, and the company’s leadership. Security progress can be hard to measure internally, but it’s an important aspect of your job, as is being accurate and setting the right expectations externally.

Security at Startups

Joining a startup is a great way early in your career to get more responsibility than anybody should probably give you.

If you want to be successful at a startup, there are 3 core aspects:

1. Relationships
2. Security Culture
3. Compromise and Continuous Improvement

1. Relationships

When you are building a security program from scratch:

Your coworkers are your customers! Building strong relationships with your engineers is the key to success. Without this, you’ll have a hard time getting engineers to work with you.

Building strong relationships with everyone else in the company is important as well. Try baking security in to the on-boarding and off-boarding flow, as it’s a great place to add security controls and meet your colleagues.

Your relationships are defined in times of crisis.

When issues arise, assure people it’s okay, you’ll fix things together. As much as possible, be level-headed, calm, and methodical in times of crisis.

2. Security Culture

Think about the type of security culture you want to build within your company. If you pull pranks on unlocked laptops, does that foster the type of trust between your team and the rest of the company that you want?

3 tips to building security culture:

Smile, even if you’re an introvert.

Be someone people like to be around.

You should be making technical changes (i.e. have an engineering-first culture on security teams). Building things earns trust with engineers who are building your product, as it shows you can contribute too.



3. Compromise and Continuous Improvement

Meet people where they are and try to continuously work with them to make them more secure. It’s no one’s fault that they have bad practices or are insecure, but it’s your responsibility to do something about it.

Realize that it can sometimes take years to get the kind of traction you want to fix underlying issues. That’s OK.

Your First Day

You enter the startup’s office. There are huge windows with natural light and edgy but not too edgy art on the walls. Exposed brick and pipes are plentiful, it’s a bit hard to think over the cacophony of the open office floor plan, and there’s more avocado toast than you can shake a term sheet at.

If you don’t fulfill the above description, r u even a startup, bro?

The co-founder sits you down and asks:

So, what challenge are you going to tackle first?

How Security Starts

It’s not uncommon for companies to have an incident or two and/or get to a certain size where they realize they need someone to work on security full time. You might want to ask during the interview, or at least when you show up on the first day, what prompted investing in security now?

Remember:

You were hired because you are the expert: people will listen to you.

You can do whatever you would like: whether it’s good for security or not.

You’ll have little internal guidance.

What Should Inform Your Priorities

All of the following influence what you prioritize first:

B2B vs B2C

This is the biggest thing that will inform your priorities. Who are your customers and what do they want from your company / the security team?

If you’re a B2B SaaS business, you’ll need compliance certifications in order for your sales team to continue selling the product to bigger and bigger customers.

Company Size

If you’re the first security person joining a 500 person team vs a 100 person team, you’ll likely prioritize different things. If the team is already large, you may want to focus on hiring other members on your team to scale your efforts.

Customer Base

Who your customers are within B2B or B2C also influences things, for example, selling HR software to banks vs. marketing software for solo consultants.

Product

There are different expectations of the security bar of your company based on what your product is.

Engineering Velocity

If your company has a move fast and break things mentality then the security team needs a different approach than if the culture is a low and slow type of company.

Company culture

Some cultures are really open to security people joining, others don’t understand why security is important. Every company is different.

In summary, companies care more or less about different things and will have different areas of risk.

Startup Security Playbook

Evan breaks where you should focus your efforts into 4 core areas:

The common denominator of all of these is that they’re short in scope. You can get 95% of the way to at least initially addressing all of these in a quarter.

Security Engineering

This includes product / application security, infrastructure security, and cloud security.

SDLC and Security Design Reviews with engineers

Starting to work with engineers and embedding yourself in how they work pays major dividends later. If there isn’t a current SDLC structure, you can do inbound only. Offer to do code review and threat modeling, show value, and word will spread. Your ad-hoc process won’t have full coverage, but it’s a good start.

Understanding your tech stack through engineering



If you want to make a difference at a startup with the way people are building software, you need to build software.

If you want to learn about how your tech stack works at a deep level, you need to build software.

A great way to build relationships with engineers is to work with them and have them see you build things as well.

How you manage secrets, keys, and customer secrets

Take inventory of all of your really critical assets.

Secrets - Do you have a management system for secrets? Are people happy with it? Do you need to roll out Vault or some other secrets management system?

API Keys - How are they used in prod? How are they shared between developers? What API keys do devs have access to? Do you have engineers with prod cloud platform API keys on their laptops?

You can have a big impact in a short amount of time here.

Bug Bounty

Don’t rush into having a bug bounty, wait until you have the bandwidth to quickly and efficiently address submissions and resolve reported issues.

Make sure you have a goal for the types of issues you want out of your bug bounty, and that your bug bounty is set up to get that output. Otherwise, you will just waste cycles.

Detection & Response / Incident Response

Detection and Response is one of the hardest areas in which to get traction. It’s also something that spans a ton of different domains: production, corporate, incidents, applications, infrastructure, SaaS…

Basic Incident Response Plan

Have a plan, get people to understand that plan, and make sure you are looped in when things go awry.

Set up a communication channel: people will start reporting things immediately, especially if you don’t already have some logging and monitoring set up. Create a way for people to tell you when things are on fire.

What are your top security signals for the organization?

What really matters for security, and how do you get insight into those signals?

Consider starting with monitoring AWS key usage, access to applications in your identity provider, and DNS at corporate offices.
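As a sketch of the first suggestion: in practice you’d pull events from CloudTrail (e.g. via boto3’s `lookup_events`), but the core filtering logic for flagging long-lived IAM user keys used from unexpected places can be expressed over plain event dicts. The event shape below mirrors CloudTrail’s fields, though the allow-list-by-IP-prefix check is a deliberately crude illustration:

```python
def flag_suspicious_key_usage(events, allowed_prefixes=("203.0.113.",)):
    """Flag CloudTrail-style events where a long-lived IAM user access key
    (ID prefix 'AKIA') was used from outside an allow-list of source IPs.
    In practice `events` would come from CloudTrail; matching source IPs
    by string prefix here is a stand-in for real CIDR checks."""
    suspicious = []
    for ev in events:
        key_id = ev.get("accessKeyId", "")
        src = ev.get("sourceIPAddress", "")
        if key_id.startswith("AKIA") and not src.startswith(tuple(allowed_prefixes)):
            suspicious.append(ev)
    return suspicious

# Illustrative sample data (RFC 5737 documentation IPs, fake key IDs).
sample = [
    {"eventName": "ListBuckets", "accessKeyId": "AKIAEXAMPLE111111111",
     "sourceIPAddress": "198.51.100.7"},   # long-lived key, unknown IP
    {"eventName": "ListBuckets", "accessKeyId": "ASIAEXAMPLETEMPKEY00",
     "sourceIPAddress": "198.51.100.7"},   # temporary STS credential
    {"eventName": "GetObject", "accessKeyId": "AKIAEXAMPLE222222222",
     "sourceIPAddress": "203.0.113.9"},    # long-lived key, allowed IP
]
```

Only the first sample event gets flagged; starting with a simple rule like this and tuning it beats waiting for a full SIEM rollout.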

Establish a communication channel with rest of company

How do people talk with you? How do people get ahold of you when they need you? This can be as simple as an email alias.

Logging Strategy

Where are you going to store logs over the long term?

Compliance

Public facing security docs are great

Publish something on your website with technical details about the security measures you’ve taken that people can reference. Have a security page and a security@ alias for people to report bugs.

Knowledge Base

The best use of your time is not completing questionnaires for sales teams. Find a way to make it self-service.

Understand existing commitments

Before security people join a startup, it’s common for the business to commit to future compliance standards it isn’t ready for, without any idea of how hard they’ll be to meet. Sometimes that’s why you were hired in the first place.

Ask management what compliance commitments your company has made.

GDPR and current laws

Make sure you comply with all of the relevant laws.

Corporate Security

Identity and Access Management

You need a way to manage your applications and access to them. Corp and prod are both important, but corp may be easier to address first.

Endpoint Security

Table stakes for the long term. It’s better to get this done sooner rather than later, because it’s easier the smaller your company is.

On-boarding and Off-boarding

You can bake a lot of security (and hopefully usability) into these. They’re also tightly coupled with Identity and Access Management: do you remove people from prod when they no longer need access?
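One concrete check implied by that last question can be sketched as a periodic diff: compare the accounts that exist in prod against the active roster in your identity provider, and treat anything prod-only as an off-boarding gap. The data sources below are stand-ins; in practice you’d pull the rosters from your IdP’s API and from e.g. IAM:

```python
def offboarding_gaps(idp_active_users: set[str], prod_users: set[str]) -> set[str]:
    """Accounts that still exist in prod but are no longer active in the
    identity provider, i.e. access that off-boarding failed to remove."""
    return prod_users - idp_active_users

# Illustrative stand-in rosters; real data would come from your IdP and
# cloud-provider APIs.
idp = {"alice", "bob"}
prod = {"alice", "bob", "mallory"}
```

Running a check like this on a schedule and alerting on a non-empty result turns off-boarding from a checklist item into something continuously verified.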

Workplace security

How do you protect people in your space? How do people badge in? Do they have to wear badges? Do you have doors propped open? Many startups start in small lofts, and when they get a bigger space they’re not used to handling these types of issues. Have procedures and policies for how visitors are handled.

Personal Stories

At Segment, they deleted the AWS keys every engineer used on a day-to-day basis and gave them an equivalent role that they could access through Okta. Their aws-okta tool is a drop-in replacement for aws-vault, which engineers were previously using.

Why was this such a success?

It raised Segment’s security posture massively.

The dev UX was really fast and smooth.

It was a massive change, yet engineering reached 100% adoption within 2-3 weeks of the security team rolling it out.

As the security team, it’s easy to say that you’ll handle all of the secrets for devs. But, Evan quickly found that people assumed he’d also handle rotation, ownership, managing the 