IPsec Vulnerabilities and Software Security Prediction Javantea TA3M May 18, 2015

IPsec-tools 0-day Exploit [sig]

IPsec-tools Null Dereference security advisory

Slides

Twitter

Hacker News

Reddit



IKEv1 Fuzzer

Security Advisory for Libreswan

Security Advisory for strongSwan



Warning

Fair warning: If you are an employee of a federal, state, city, or county government, or if you carry a clearance, this document contains material that you are not allowed to see. If you wish to become employed at such an agency, viewing this document will make your life more difficult.

Introduction

IPsec-tools is vulnerable to a 0-day exploit that I am making available today. It is a null dereference crash, so it's a denial of service against the IKE daemon. You may scoff and say that it doesn't deserve a medium rating, but remember that IPsec is critical infrastructure and this attack requires only two small UDP packets. This denial of service violates the premise that security is built upon. More information about the impact can be found in the IPsec-tools Null Dereference security advisory. Do not treat this paper as a reference for the vulnerability; it is mostly an attempt to understand the minutiae of this vulnerability and many like it.

If you're running IPsec-tools, replace it sensibly as soon as possible. Do not replace it with Openswan or Freeswan and do not replace it with an old unpatched version of another IPsec implementation.

This paper is in two parts: IPsec-tools Vulnerability and Software Security Prediction.

Outline:

IPsec-tools Vulnerability
What is IPsec?
Demo
How Not To Use IPsec
Why not?
The Vulnerability
What it means
Detection
IPsec Design Choices
Who uses IPsec-tools?
Aside: IPsec Scanners
How many webpages need to be fixed?
Are the others any better?

Software Security Prediction
Crystal balls do not work
"I came not to bring peace, but to bring a sword"
Steps to Distro Security
Aside: Bad Actors Never Think Of Themselves That Way
The list of software to replace
The list of software to fix
Steps in the right direction
Conclusion

Works Cited

What is IPsec?

IPsec is a piece of software that is often used for critical infrastructure. It modifies the IP stack so that all layers above IP (TCP, UDP, etc.) can be encrypted, and even Layer 2 can be encrypted using a tunneling daemon. It's often described as a VPN and is used as part of a VPN, but don't confuse yourself about what IPsec is doing. IPsec provides encryption and authentication between two points speaking IPv4 or IPv6.

Encryption (optional)

Authentication (optional)

Confidentiality

Integrity

Availability?

Demo

IPsec-tools 0-day Exploit [sig]

Usage:

python3 repro_racoon_dos129.py
Warning: Unable to bind to port 500. Might not work. [Errno 13] Permission denied
Umm, okay.
129 ('\x81\xcf{r\x8e\xb6a\xdd9\xf1\x87cP\xb1\x05\xc7\x01\x10\x02\x00\x00\x00\x00 \x00\x00\x00\x00\x98\r\x00\x00

What it looks like on the server:

sudo racoon -F -v -f server_racoon.conf >server_dos5m.txt 2>&1 &

jvoss@ipsecu:~$ dmesg |tail
[  584.440533] AVX or AES-NI instructions are not detected.
[  584.442253] AVX or AES-NI instructions are not detected.
[  584.490468] AVX instructions are not detected.
[13683.867215] init: upstart-udev-bridge main process (361) terminated with status 1
[13683.867223] init: upstart-udev-bridge main process ended, respawning
[13683.867307] init: upstart-file-bridge main process (452) terminated with status 1
[13683.867313] init: upstart-file-bridge main process ended, respawning
[13683.867386] init: upstart-socket-bridge main process (616) terminated with status 1
[13683.867392] init: upstart-socket-bridge main process ended, respawning
[19912.460170] racoon[3701]: segfault at 100 ip 00007fe0eba84ce7 sp 00007ffff51db730 error 4 in racoon[7fe0eba5e000+93000]

2015-04-27 15:22:14: INFO: received Vendor ID: draft-ietf-ipsec-nat-t-ike-00
2015-04-27 15:22:14: INFO: received broken Microsoft ID: FRAGMENTATION
2015-04-27 15:22:14: INFO: received Vendor ID: DPD
2015-04-27 15:22:14: [169.254.44.43] INFO: Selected NAT-T version: RFC 3947
2015-04-27 15:22:14: [169.254.44.43] ERROR: ignore the packet, received unexpecting payload type 128.
2015-04-27 15:22:14: INFO: respond new phase 1 negotiation: 169.254.88.251[500]<=>169.254.44.43[42258]
2015-04-27 15:22:14: INFO: begin Identity Protection mode.
2015-04-27 15:22:14: INFO: received Vendor ID: RFC 3947
2015-04-27 15:22:14: INFO: received Vendor ID: draft-ietf-ipsec-nat-t-ike-02
2015-04-27 15:22:14: INFO: received Vendor ID: draft-ietf-ipsec-nat-t-ike-02
2015-04-27 15:22:14: INFO: received Vendor ID: draft-ietf-ipsec-nat-t-ike-00
2015-04-27 15:22:14: INFO: received broken Microsoft ID: FRAGMENTATION
2015-04-27 15:22:14: INFO: received Vendor ID: DPD
2015-04-27 15:22:14: [169.254.44.43] INFO: Selected NAT-T version: RFC 3947

Program received signal SIGSEGV, Segmentation fault.
0x000055555557ace7 in ?? ()
(gdb) bt
#0  0x000055555557ace7 in ?? ()
#1  0x000055555557b775 in ?? ()
#2  0x000055555556c1a1 in ?? ()
#3  0x0000555555563fd1 in ?? ()
#4  0x00005555555658ec in ?? ()
#5  0x000055555555fc9d in ?? ()
#6  0x000055555555f273 in ?? ()
#7  0x00007ffff6953ec5 in __libc_start_main (main=0x55555555f010, argc=5, argv=0x7fffffffe738, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe728) at libc-start.c:287
#8  0x000055555555f3ec in ?? ()
(gdb) x/15i $rip - 12
   0x55555557acdb: mov    %eax,0x1c8(%rsp)
   0x55555557ace2: mov    0x28(%r12),%rax
=> 0x55555557ace7: mov    0x100(%rax),%rax
   0x55555557acee: mov    0x30(%rax),%rax
   0x55555557acf2: test   %rax,%rax
   0x55555557acf5: je     0x55555557af00
   0x55555557acfb: mov    (%rax),%rdx
   0x55555557acfe: lea    0x20(%rsp),%r13
   0x55555557ad03: mov    0x8(%rax),%rax
   0x55555557ad07: lea    0x1c(%rsp),%rbx
   0x55555557ad0c: lea    0x30(%rsp),%rsi
   0x55555557ad11: mov    %r13,%rcx
   0x55555557ad14: mov    %rdx,0x30(%rsp)
   0x55555557ad19: mov    %rbx,%rdi
   0x55555557ad1c: xor    %edx,%edx
(gdb) i r
rax            0x0      0
rbx            0x0      0
rcx            0x5555558dbe40   93824995933760
rdx            0x5555558dbe40   93824995933760
rsi            0x0      0
rdi            0x5555558dbdc0   93824995933632
rbp            0x5555558dbdc0   0x5555558dbdc0
rsp            0x7fffffffd180   0x7fffffffd180
r8             0x5555558dbdc0   93824995933632
r9             0x7ffff6cf07b8   140737334151096
r10            0xbdb00  776960
r11            0x5555558da301   93824995926785
r12            0x5555558da300   93824995926784
r13            0x555555822460   93824995173472
r14            0x5555558da420   93824995927072
r15            0x7fffffffd260   140737488343648
rip            0x55555557ace7   0x55555557ace7
eflags         0x10206  [ PF IF RF ]
cs             0x33     51
ss             0x2b     43
ds             0x0      0
es             0x0      0
fs             0x0      0
gs             0x0      0

How Not To Use IPsec

IPsec comes with many vulnerable modes of operation. This was discussed in Schneier's "A Cryptographic Evaluation of IPsec" [Schneier-1999]. Since then, many IPsec implementations have made it easier for users to configure their systems to be secure by default and not insecure in their operation. This is not a simple task, though. The secure configuration requires a secure certificate authority or secure transmission of certificates. While this seems easy, most administrators do not have the time to do this properly using an air-gapped machine, so they do it improperly instead. Using secure transmission of certificates instead causes a problem of scalability, so securing more than a handful of systems becomes a major ordeal. Thus we see why many administrators have chosen to use a PSK instead.
PSKs are vulnerable to many attacks currently being exploited in the wild. Part of this can be explained by bad configuration, but most of it is bad design. Here is a list of advice about running IPsec.

Don't use PSK. [Spiegel] [Bellovin]

Don't use Aggressive Mode. [CG]

Don't bridge your network with an attacker without using a firewall.

Don't bridge your network with a person or company that doesn't have competent IT staff.

Don't use IPsec instead of TLS.

Don't use IPsec to prevent 0-day.

Don't use IPsec to run a vulnerable system over TLS.

Don't run IPsec if you don't have a competent IT person; if you run it anyway, you will probably be worse off than not running it at all. Encryption is a multi-faceted system. Security is doubly so. All rules for security depend on assumptions that are not true if you do not properly implement safeguards. Confidentiality does not come without effort. Integrity does not come without effort. Availability does not come without effort. Anyone who tries to sell you a device that promises security (confidentiality, integrity, or availability) without effort probably has the ability to take that security (and more) away from you.

Don't buy a device that does IPsec without updating the software.

Don't use PSK.

Why not?

The NSA wants all your data and they have the computational power to take it. [Spiegel]

That's why not.

In case you're curious what this data flow diagram means, I will explain. If you are using IPsec, the NSA intercepts your key exchange and your encrypted data. They send the data to their processing facility and if they are able to crack it, they send the decrypted data to their other projects which are designed to compromise your network.

The Vulnerability

ipsec-tools-0.8.2/src/racoon/gssapi.c:205:

if (iph1->rmconf->proposal->gssid != NULL) {

This line of code dereferences a chain of pointers. During the exploit, iph1->rmconf is NULL, so reading rmconf->proposal dereferences a null pointer and the daemon crashes.
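The unguarded chain can be sketched in Python, where an attribute access on None plays the role of the C null dereference. The class and field names mirror racoon's structs but are illustrative assumptions, not the real data model:

```python
# Illustrative Python analogue of the unguarded pointer chain in racoon's
# gssapi.c. An AttributeError here stands in for the C segfault.

class Proposal:
    def __init__(self, gssid=None):
        self.gssid = gssid

class RemoteConf:
    def __init__(self, proposal):
        self.proposal = proposal

class Phase1Handle:
    def __init__(self, rmconf=None):
        self.rmconf = rmconf  # may be unset mid-negotiation

def has_gssid_unguarded(ph1):
    # Equivalent of: if (iph1->rmconf->proposal->gssid != NULL)
    # Blows up when rmconf is None.
    return ph1.rmconf.proposal.gssid is not None

def has_gssid_guarded(ph1):
    # Defensive version: check every link in the chain first.
    return (ph1.rmconf is not None
            and ph1.rmconf.proposal is not None
            and ph1.rmconf.proposal.gssid is not None)

ok = Phase1Handle(RemoteConf(Proposal(gssid=b"krb5")))
bad = Phase1Handle(rmconf=None)  # the state the exploit induces
```

Calling has_gssid_unguarded(bad) raises, which is the DoS; the guarded version simply returns False.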

Fuzzers missed this [Tsankov]. Hackers missed this. I would have missed it if I wasn't diligent and lucky.

Since NetBSD, FreeBSD, Android, and many products use IPsec-tools, it would be worthwhile to test them all to find out which are vulnerable and which are not. Android does not compile in GSSAPI according to the Makefile I checked, but we'll get back to why Android using IPsec-tools is bad.

Why isn't this patched already?

How long does it take to fix one bug?

It would take about 1 hour to fix and ensure that no similar bug existed nearby. It would take about 20 hours to notify everyone who makes IPsec-tools available through a product.

I am unwilling to do that work. Full disclosure and a CVE were made for this problem. [Schneier-2007a]

The distrust of IPsec-tools that this vulnerability presents is insurmountable. If you think I'm wrong, point to the maintainer of IPsec-tools. If someone picks this up, I'm going to find another vuln in it and drop 0-day again.

On the day after I released the 0-day exploit, Christos Zoulas wrote a patch. I wish to warn users who apply this patch that it has not been tested sufficiently to ensure that there are not more bugs nearby. According to Rainer Weikusat, the patch does not actually fix the problem even if it does prevent the crash.

This bug violates the premise that IPsec's security is based upon. Someone who doesn't understand IPsec might say this is a low severity vulnerability. Someone who does distro security should see this as the final nail in IPsec-tools' coffin. This is why Android should abandon IPsec-tools. Not because they are vulnerable to this vulnerability, but because there are possibly more vulnerabilities that no one is looking for and no one will fix if someone finds them.

What it means

When the IKE daemon crashes, it may or may not be restarted.

If it is restarted, it gives the attacker as many attempts as they want to get the IKE daemon into the startup state. Result: unknown.

If it is not restarted, the keys do not get changed. When the IV gets repeated, the stream loses some confidentiality and integrity. Replay becomes easy. Result: possible compromise.

If the system decides that the two systems should no longer use IPsec, the system may revert back to IP silently. Result: possible complete compromise.

If the system decides to instead stop sending packets to the affected system, this becomes a denial of service. Result: complete availability compromise.
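A toy sketch of why unrotated keys and repeated IVs matter: with a stream-style keystream (a hypothetical hash-based one here, not ESP's actual cipher), two ciphertexts under the same key and IV XOR to the XOR of their plaintexts, and decryption is the same operation as encryption:

```python
# Toy demonstration of why a repeated IV under an unchanged key leaks data.
# The keystream is a made-up hash construction for illustration only; real
# ESP uses CBC or CTR block ciphers, but the CTR/stream failure mode is
# the same.

import hashlib

def keystream(key, iv, n):
    # Hypothetical keystream: hash(key || iv || counter) blocks.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, iv, pt):
    ks = keystream(key, iv, len(pt))
    return bytes(a ^ b for a, b in zip(pt, ks))

key = b"unchanged-session-key"
iv = b"repeated-iv"          # daemon crashed: keys and IVs never rotate

p1 = b"transfer $100 to alice"
p2 = b"transfer $999 to mallo"
c1 = encrypt(key, iv, p1)
c2 = encrypt(key, iv, p2)

# With a repeated IV the keystream cancels: c1 XOR c2 == p1 XOR p2,
# so an eavesdropper learns the XOR of two plaintexts without the key.
xor_ct = bytes(a ^ b for a, b in zip(c1, c2))
xor_pt = bytes(a ^ b for a, b in zip(p1, p2))
```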



The most likely result is clearly the availability compromise. However, it is possible for the system to revert to plain IP silently, which is by far the worst-case scenario. Ubuntu and Gentoo by default do not restart the server, which leads to availability compromise for systems that have not yet exchanged keys; for any system that has exchanged keys, the result is unknown. The worst possible result for Ubuntu and Gentoo in their default configuration is possible compromise. However, since IPsec requires administrators to configure their systems, it is likely that many different configurations exist that do not conform to these assumptions.

When encryption is compromised, the layer below becomes available to the attacker. IPsec was designed to run over public networks, thus attackers are there.

An aside about man-in-the-middle attacks: they are not theoretical. They are practical and exploitable on WiFi (road-warrior use-case), corporate networks (flat-topology corporation use-case), server racks (DMZ/segmented use-case), and backbone routers (ISP use-case). [Beale]

I am not trying to be alarmist. The odds of someone compromising your entire corporate network based on this vulnerability are low. It will take someone at least 4 hours to attempt to compromise your network based on this exploit. The NSA compromised an air-gapped network in Iran and destroyed centrifuges [Ferran]. If they want what you have, they can use this or five other exploits.

Detection

The exploit is easily detectable if logs are turned on or if users disconnect and reconnect regularly. It may be possible to create an Intrusion Detection/Prevention (IDP) signature to find attackers who are using this exploit against infrastructure you protect, but more likely you will have to run a honeypot to detect a sophisticated attacker.

IPsec Design Choices

IPsec is too complex a protocol. It makes many design choices which are incompatible with security, which not only make it difficult to implement, but also difficult to test.

The TLV payload design makes it possible for the first packet to contain dozens of payloads, each of which can contain child payloads which contain child payloads. Implementations have taken this to mean that they should have a handful of Vendor ID payloads to negotiate support for different extensions. TLVs are designed to reduce buffer overflows, not cause them, but in the case of IPsec the design has created enough complexity for implementors that it is a constant source of bugs. Fuzzing is hindered by the "Next Payload" design of the packet, in which the type of the next payload is defined by the previous payload. This is similar to the way that linked lists work, but requires more than the usual conditional logic to implement. Developer confusion has led to many vulnerabilities and bugs that have mired IPsec security efforts.
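The linked-list-like design can be sketched as a payload walker. The 4-byte generic payload header layout (next-payload, reserved, length) follows RFC 2408; the length checks are the kind of validation that implementations get wrong:

```python
# Sketch of walking an IKEv1 "Next Payload" chain. Each generic payload
# header is: next-payload (1 byte), reserved (1 byte), length (2 bytes,
# big-endian, counting the 4-byte header). The type of each payload is
# named by the *previous* header (or the ISAKMP header for the first).

import struct

def parse_payloads(data, first_type):
    payloads = []
    ptype, off = first_type, 0
    while ptype != 0:                      # 0 means "no next payload"
        if off + 4 > len(data):
            raise ValueError("truncated payload header")
        next_type, _reserved, plen = struct.unpack_from("!BBH", data, off)
        if plen < 4 or off + plen > len(data):
            raise ValueError("bad payload length")  # classic fuzzer target
        payloads.append((ptype, data[off + 4:off + plen]))
        ptype, off = next_type, off + plen
    return payloads

# Two chained payloads: a type-13 (Vendor ID), then a type-4 (Key Exchange).
blob = (struct.pack("!BBH", 4, 0, 8) + b"VID!"
        + struct.pack("!BBH", 0, 0, 6) + b"KE")
```

A fuzzer has to thread a valid chain of lengths and next-payload bytes just to reach deeper parsing code, which is part of why this design hinders fuzzing.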

With this design, one would expect it to be extremely flexible. Assuming that a person chooses, they can do a large variety of things with this protocol, not all of which are bad. However, this is not true flexibility. It lacks the flexibility of X.509, which is itself incredibly inflexible. A feature that X.509 has that IPsec does not: chain of trust. Almost no implementation of IPsec allows for root certificate authorities. This is a good thing and a bad thing. To bootstrap security, IPsec implementations are now supporting DNSSEC, which has its own failures.

Why do we need flexibility? Flexibility in a protocol makes it possible for two implementations to talk to each other while at the same time supporting features that each does not support. For example, a web server can support HSTS when the browser does not. In IPsec, it does not make sense for there to be a lot of flexibility in the data format. Key exchanges will operate in the same manner almost always. TLS, for example, is far less flexible than IKE and benefits from it. This has not made bugs in TLS few; in fact, recently we have seen many design flaws and implementation failures in TLS. But it is more likely than not that design flaws and implementation failures are being missed in IPsec because there are so many shallow bugs being found.

Privileges of the IKE daemon are often root. This is because the daemon needs to regularly set keys in the kernel, which should only be done by that daemon. Since many kernels do not have capabilities handling, implementations have chosen to simply run the IKE daemon as root. Specifically, I'm talking about IPsec-tools, but I am confident that there are other implementations running as root.

One of the design choices that makes IPsec incredibly hard to configure correctly is that any IP packet can go over IPsec. You might think: that makes perfect sense, why not allow any packet and let the users choose which they want to accept with firewalls? The problem is that users are time-strapped and are overly optimistic that their system will be secure by default, which is almost never true.

In 2013 I implemented an IKE client in 8 hours. This resulted in finding the null dereference vulnerability in IPsec-tools racoon. In 2014 I implemented an IKE server in 8 hours. This was a difficult process because the computations required are not implemented in Python, therefore they need to be written from scratch. IKE clients and servers have been written in Python, but most are not designed to make it easy for a user to modify. As an expert in network protocol implementation, I find that cryptographic protocols are some of the most difficult to implement. Those that lack cryptography are sometimes simple enough to implement in seconds using netcat or Python. IKE was especially difficult because it is so complex that many features that were required were then ignored by the implementation I communicated with. This design of course means that only people who are invested in IPsec will ever endeavor to test it. These people are too few to ensure the security of more than a few IPsec implementations.

Bruce Schneier publicly lambasted IPsec when he wrote a cryptographic analysis which came up with many faults [Schneier-1999]. While some have pointed out that the paper says that IPsec is the best we have, I rebut with this quote:

... we do not believe that it will ever result in a secure operational system. It is far too complex, and the complexity has lead to a large number of ambiguities, contradictions, inefficiencies, and weaknesses. It has been very hard work to perform any kind of security analysis; we do not feel that we fully understand the system, let alone have fully analyzed it. [Schneier-1999 26]

IPsec is a protocol that many people need for their business, so this design complexity makes everyone using IPsec less secure. The authors of IPsec should be ashamed of themselves. And yes, IPsec's RFC was written by the NSA. Who guessed that one? Their only excuse was that they designed it in 1998, which was at the height of the Crypto War. So why haven't we replaced IPsec? Many people have chosen to work on implementing IPsec correctly instead of replacing it.

This should not be news to anyone who has implemented IPsec. This should not be news to anyone who has used IPsec. The solution is not to fix our implementations of IPsec, it is instead to write a secure open source implementation of a different protocol and get people to switch. OpenVPN has done this, but I have not tested it, so I cannot say anything definitively about its quality.

IPsec-tools has a unique response signature, so you can write a Python script, an Nmap script, or a Nessus script to detect it with few or no false positives (though I believe there will be many false negatives). If you run FreeBSD or NetBSD with IPsec, you are running IPsec-tools. If you are running pfSense, you are running IPsec-tools. I wasn't able to run NetBSD, FreeBSD, or pfSense, so these statements need to be tested. Who wants to e-mail all the authors? Not me.
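As a sketch of what such a detection script would start from, here is the fixed 28-byte ISAKMP header from RFC 2408 built with struct. A real fingerprinting probe would append SA/proposal payloads and match quirks in racoon's response; this shows only the header layout:

```python
# Minimal ISAKMP (IKEv1) header builder, per RFC 2408: initiator cookie
# (8 bytes), responder cookie (8), next payload (1), version (1),
# exchange type (1), flags (1), message ID (4), total length (4).
# This is a sketch of a probe's first 28 bytes, not a complete scanner.

import os
import struct

def isakmp_header(next_payload, exchange_type, length, msg_id=0,
                  initiator_cookie=None):
    icookie = initiator_cookie or os.urandom(8)
    rcookie = b"\x00" * 8   # responder cookie is zero in the first packet
    version = 0x10          # major version 1, minor version 0
    flags = 0
    return struct.pack("!8s8sBBBBII",
                       icookie, rcookie,
                       next_payload, version, exchange_type, flags,
                       msg_id, length)

hdr = isakmp_header(next_payload=1,   # an SA payload would follow
                    exchange_type=2,  # Identity Protection (Main Mode)
                    length=28)
```

Sending such a packet to UDP port 500 and comparing the responder's vendor IDs and error behavior against known racoon quirks is the non-harmful detection idea mentioned above.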

Since the release of this paper, 56 messages have been sent to ipsec-tools-devel in the month of May.

You don't need to run

nmap -sU -Pn -n -vvv -iR 100000 -p 500 -oA nmap_ike1

or

sudo nmap -sU -sV -O -Pn -n -vvv -iR 100000 -p 500 -oA nmap_ike2

or

sudo zmap ike

to find a long list of people using IPsec-tools. Try downloading the Internet survey. Try getting a Pro account at Shodan. Also, Nmap doesn't find vulnerable servers as easily as my exploit does. I even have a non-harmful way of detecting IPsec-tools.

Aside: IPsec Scanners

There are a large number of IPsec scanners currently scanning the whole IPv4 space for IKE servers (396 unique IP addresses, 17541 packets received). What do you think they're up to? Some of them are just researching. Some of them are script kiddies. Some of them are spoofing their IP address. But my educated guess is that most of them have 0-day in hand and are exploiting every vulnerable VPN they come across. What evidence do I have to support my hypothesis? Only the packets that each of these is sending. A quick look at these packets shows that they use Microsoft extensions to IPsec related to AuthIP, which is a proprietary protocol on top of IKE. Not all of the packets had these extensions, which means that more than a few different types of scanners are being run. The fact that similar packets are being sent from different IP addresses means that these systems are running the same software. Since they are communicating on port 500 with me, with whom they have no business relationship, they are either misconfigured or intentionally scanning. Since many of these are sending from port 500 (318 unique IP addresses), we have to assume that they have root privileges or low-port capabilities.

Count  IP Address
  915  92.156.83.10
  413  88.182.227.2
  379  222.64.125.46
  366  202.153.47.42
  156  92.139.69.91
  146  195.87.244.8
  134  2.12.52.14
  132  5.107.86.214
  115  93.100.141.178
  113  212.57.6.226
  102  212.21.46.34
   98  41.214.10.33
   92  114.35.125.229
   90  41.136.2.241
   90  41.136.18.209
   85  79.165.141.243
   79  67.68.122.156
   79  46.14.13.125
   64  124.148.219.105
   60  190.199.39.243
   59  203.59.158.2
   57  95.29.206.187
   56  185.56.161.133
   56  154.70.115.98
   46  41.136.47.233
   45  86.235.41.154
   43  212.87.172.4
   42  89.157.119.185
   41  50.189.102.250

How many webpages need to be fixed?

Hundreds of webpages need to be fixed. The reason I found this vulnerability is because I made the mistake of following someone's outdated IPsec How-To. My mistake is likely to be followed by many others. Don't trust How-Tos written ten years ago. Don't trust How-Tos written yesterday. The lesson to learn here is that the author of a How-To is probably not an expert, but simply a person who has time to write and publish a How-To. Instead of looking for these people, you should instead be looking for experts to guide the installation and configuration of critical software. Most experts don't have time to do the necessary testing, so they might not be the right people to look for either. Instead, if you need to support critical infrastructure, invest the time in learning what is insecure and what is secure. Test the systems you support using state of the art software and methods. Many of those who cut corners are being exploited at this very moment and there is no recourse.

Are the others any better?

The answer to this question is very complex, so yes and no. Bugs have recently been found in the competing projects strongSwan, Libreswan, and OpenVPN. I myself found two similar vulnerabilities in strongSwan and two similar vulnerabilities in Libreswan after running my IKEv1 fuzzer. When I told the authors of strongSwan and Libreswan that they needed more security testing, they assured me that they have audited the software, so more severe vulnerabilities are rare. If more vulnerabilities are found in the near future, we can decide that they are incompetent or lying, but until we have evidence otherwise we should take their word. strongSwan, Libreswan, and OpenVPN have maintainers and contributors. Libreswan has development as recent as 12 days ago; strongSwan and OpenVPN both had development today. OpenVPN is trusted by dozens of security companies. Libreswan and strongSwan are both being used in many routers around the world.

How many of these can be said of IPsec-tools? No recently found bugs, no maintainer, no development in years, and no self-respecting security company uses IPsec-tools. Or do they? From the chatter on IPsec-tools-devel, we can tell that there are still users.

Software Security Prediction

Crystal balls do not work

I am not here to give us a crystal ball or try to accurately predict the next month of Full-Disclosure. Instead of picking stocks, I am going to reveal a software security trend.

Today I provide a tool that will focus our efforts on replacing the bad actors of the software world and on testing and fixing bugs in good software. If we decide not to fix the software that everyone uses, we will be worse off for it. Software is often said to be a lemon market [Schneier-2007b]. This is not true. It is a very complex market where some people know so much they burst at the seams trying to report all the vulnerabilities they find. Some people honestly know nothing, and that's okay, so long as their choices are based on others' expertise and on enough data to make good decisions.

"I came not to bring peace, but to bring a sword"

Uninstall Windows.

Uninstall Adobe Flash.

Uninstall Adobe Acrobat.

Uninstall Java.

These three bad actors (Microsoft, Adobe, and Oracle) have shown themselves to be incapable of producing secure software. Microsoft has spent huge amounts of money but seems to be unable to strike at the root of their security problems. A good example is the "Pass the Hash" exploit that they refused to patch for a decade [Delpy]. It resulted in the compromise of hundreds of businesses around the world, big and small. The vulnerabilities in IE6, IE7, and IE8 are a good example of how bad design and insufficient testing produces substandard software which does immense harm to users. While many will point out that Firefox, Chrome, and Opera all had similarly exploitable vulnerabilities, even now that Firefox and Chrome have a majority of the browser market, they are not being targeted as much as IE was. This is mainly due to the severity of the vulnerabilities found in Internet Explorer, but also because users of IE did not upgrade their software. When your users don't upgrade their software, you have to produce higher quality software.

Adobe created the PDF format to be a container for every possible media format. Adobe Acrobat is a piece of software that is so complex because of this container format that it will never be secure. Adobe has made no attempts to make it secure, only to react to the many vulnerabilities found in the software. Adobe Flash was purchased from a vendor that didn't take security seriously. Flash attempts to make a fully Turing-complete and featureful language available to web developers to be run in the browser as a plugin. Because it could add video before browsers were willing to implement it, it became the de facto standard for video on websites, so a majority of users installed the plugin to view these videos. The software was never properly tested, so vulnerabilities were found and continue to be found.

Oracle bought Sun Microsystems and along with it came Java, a language used so widely that it is on more systems than Windows or Mac OS X. It is in servers, Blu-ray players, Android phones, Minecraft, and until recently browsers. Oracle did not do proper testing or auditing of new features it added to the most recent versions of Java, making it vulnerable to a large number of attacks. Now Java should only be used for Android, desktop applications, and on J2EE servers. Because of its flaws, it should be used only when there is no alternative.

I know a lot of people can't uninstall this software just yet. When you find the true name of a bad actor, it's important to add them to a list. We won't be glitter bombing these bad actors yet; instead we can start by creating systems that do not have their products installed. Do not put valuable data on systems that have their software installed. Do not type your valuable passwords into systems that have their software installed. This is hard. I know because I have tried it. If you find a solution that doesn't require users to expend enormous effort, please cite my paper.

Steps to Distro Security

Step 1

Step 1: Create a metadata file for each package from your distro. Copy or link all metadata files into a directory. Put a directory listing into a file. Order the list of packages by number of systems that currently use that piece of software.

For starters, this is the code for Gentoo using eix. There are better ways to do this, but this is effective.

EIX_LIMIT=0 eix -I --only-names > packages1.txt
mkdir security_metadata ; cd security_metadata
for pkg in $(cat ../packages1.txt) ; do
    mkdir $(dirname "$pkg") 2>/dev/null
    touch "$pkg"
    ln -s "$pkg" $(dirname "$pkg")__$(basename "$pkg")
done
ls -1 | sed 's#__#/#g' > ../packages_sort.txt

Now to get these in order of how many systems. Run the first command on many systems and name the output file differently for each system. Put all these files on one system and do the following:

cat ../packages1*.txt |sort |uniq -c |sort -nr > ../packages1_sus.hist

Now we have a histogram that tells us which are the most important packages because they are installed on a lot of machines. For my four systems, 192 packages are on all 4, while 1102 are on 2 or more. This is out of 2328 packages. While even the smallest package can compromise a system, the ones that aren't used often can be pushed back and the ones that are used more often can be promoted. To use security data, we can use this line of code:

grep -h '<package name' /usr/portage/metadata/glsa/glsa-20*.xml | sed 's#<package name="##g;s#" auto="[^"]*" arch="[^"]*">##g' |sort |uniq -c |sort -nr > ../glsa1.pkg.hist

As you can plainly see, all of the major offenders are right up top. Don't get me wrong, these are people who have spent countless hours tirelessly searching for vulnerabilities and fixing them. They should be applauded. But we can learn from this which software needs the most work. For those that don't go above and beyond on their testing, their software should be blacklisted until they prove themselves.
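The sort|uniq -c|sort -nr pipelines above can be reproduced in a few lines of Python with collections.Counter, which may be easier to extend; the file-name pattern follows the shell example:

```python
# Python equivalent of: cat packages1*.txt | sort | uniq -c | sort -nr
# One package name per line, one packages1_*.txt file per system.

from collections import Counter
import glob

def package_histogram(pattern="packages1*.txt"):
    counts = Counter()
    for path in glob.glob(pattern):
        with open(path) as f:
            counts.update(line.strip() for line in f if line.strip())
    # Most-installed packages first, like sort -nr on uniq -c output.
    return counts.most_common()
```

The same Counter approach works for the GLSA package names once they are extracted, so both histograms can come from one script.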

Step 2

Step 2: Add a security contact to the metadata for that package.

You must be able to contact this person in case of a vulnerability.

This should be a security@project.tld address.

When this person goes on vacation, they need to give access to security@project.tld to someone who can deal with a vulnerability.

Ensure that you have recently contacted this person about a security bug and have received a valid response from them. If you don't have a security contact and the project hasn't fixed a bug in over 2 months, the project is not being properly maintained. Add it to the list of unmaintained projects. If the project doesn't have a security contact but has fixed bugs, find out whether the person who pushed the bug fix is willing to officially fix security bugs. If not, the project is not properly maintained. If the maintainer is on vacation and there is no other maintainer, make a note to contact them when they come back from vacation.
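The triage rule above can be sketched as a small function; the field names and the 60-day window are assumptions drawn from the text ("over 2 months"), not an existing distro metadata format:

```python
# Step 2 triage sketch: a project counts as maintained only if it has a
# working security contact, or has fixed a bug within the last two
# months; a vacationing sole maintainer is a follow-up case.

from datetime import date, timedelta

def triage(has_security_contact, last_bugfix, today=None,
           maintainer_on_vacation=False):
    today = today or date.today()
    recent_fix = (last_bugfix is not None
                  and today - last_bugfix <= timedelta(days=60))
    if has_security_contact or recent_fix:
        return "maintained"
    if maintainer_on_vacation:
        return "recheck-later"   # contact them when they return
    return "unmaintained"
```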

Step 3

Step 3: Once you filter the projects into maintained and unmaintained, start at the top of the unmaintained list and contact heavy users about possibly finding a maintainer.

Tell them that if the project is not maintained, it will be removed from the distro. If no maintainer/security contact is found, the project is no longer acceptable in the distro and must be replaced. Put the project into the list of projects that need to be replaced.

Step 4

Step 4: Once you filter the unmaintained projects into possibly maintained and needs to be replaced, look for replacements for the project. If one exists, deprecate the project and let users know that a replacement exists. If users refuse to replace the project, let them know that they do it at their own risk and do not allow them to install it using your distro's package management. Instead provide a way for a user to download it manually and install it (like a Windows application). Use the distro security system to inform the user, when they ask, that the project is unmaintained.

For example, Gentoo could provide an overlay with projects deprecated due to security concerns. This would come with a set of GLSAs which describe the problem of outdated software. These GLSAs would not need to be distributed with the normal portage system unless the packages were being distributed with portage.

Step 5

Step 5: Require maintained projects to increase their security depending on their criticality. Decide this by weighing the severity of a compromise of the system against the date of the last vulnerability found. This will raise the priority of projects that need security testing, and it will eventually decrease the time spent looking at projects that have good security from the get-go. Projects may decide to hide vulnerabilities to make their last bug appear more distant, which means that when users report bugs in old software, those reports should be integrated into this data rather than being lost in the bug tracker as just another fix.
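One way to sketch this weighting, with an assumed 1-10 criticality scale and an assumed one-year half-life for urgency (both numbers are illustrative, not from the text):

```python
from datetime import date

def audit_priority(criticality, last_vuln_date, today=date(2015, 5, 18)):
    """Hypothetical score: criticality runs 1 (low) to 10 (full system
    compromise). A recent vulnerability keeps the score high; a long
    clean record lets audit effort drift toward projects that need it."""
    days_clean = (today - last_vuln_date).days
    # Assumed decay rate: ~365 clean days halves the urgency.
    return criticality / (1.0 + days_clean / 365.0)

# A critical project with a vuln last month outranks a low-value
# project that has been clean for three years.
recent_critical = audit_priority(10, date(2015, 4, 20))
old_low = audit_priority(3, date(2012, 5, 1))
```

The exact curve matters less than the property that hidden vulnerabilities, once reported by users, reset `last_vuln_date` and pull the project back up the list.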

Step 6

Step 6: Require security testing tools used on critical projects to be used against all similar projects (high and low importance projects). While this will uncover many low severity vulnerabilities and waste time, it will improve the overall quality of source code without requiring duplication of efforts.

The cost of looking for bugs is greatly outweighed by the benefit of testing these systems.

Step 7

Step 7: Create a list of authors who have abandoned projects. Make sure that any project that they work on is treated more critically than it otherwise would be. This prevents serial offenders from contributing to a project during its maintenance phase, becoming the maintainer, and then abandoning it.

Step 8

Step 8: Use a smell test to determine whether a project is being tested as well as it should be. This includes running splint, pylint, and other noisy "bug finder" tools on the software as well as taking into account gripes from users about bugs that aren't being fixed. This type of transparent gripe factory will provide data to help users decide whether a piece of software is under heavy development and shouldn't be used in production or whether it's a lemon, like OpenSSL. This will make it possible to predict security based on the trend of improving security, without lemons or bad actors.
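A crude version of this smell test is just a warning tally per file. The tool output below is made up for illustration; a real run would parse actual splint or pylint output, which follows the same `file:line:` shape.

```python
import re
from collections import Counter

# Sample lines in the style of splint/pylint output; file names and
# messages are invented for this sketch.
tool_output = """\
parser.c:120: warning: possible null dereference of p
parser.c:188: warning: unused variable tmp
daemon.py:45: W0612 unused-variable
daemon.py:60: E1101 no-member
"""

def smell_score(output):
    """Tally diagnostics per file as a rough 'smell' metric.
    A high count is not proof of bugs, but it is data a user can see."""
    counts = Counter()
    for line in output.splitlines():
        m = re.match(r"([^:]+):\d+:", line)
        if m:
            counts[m.group(1)] += 1
    return counts

score = smell_score(tool_output)
```

Tracking this number over releases is what makes it a trend rather than a one-off gripe.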

Aside: Bad Actors Never Think Of Themselves That Way

Bad actors usually think of themselves as the good guy. They are the main developer of a popular open source software project. They are a well paid developer of a major piece of software. They are the father of two wonderful children. They are the judge that makes hard choices because no one would otherwise. They are the entrepreneur who supplies medication to people who are in severe pain.

Relativism and Utilitarianism are not valid excuses for doing harm. A person who uses relativism as an excuse will say: "I believe that the software I make is good for the community, so you are not correct when you say that I am a bad actor." To this I reply: when I see someone doing harm, I do not let them continue if it is in my power to stop them. In the case of software, I can encourage people to stop using bad software. I do this by labeling it bad software and its authors as bad actors. This allows people who defer to my judgment to choose software based on these labels (whether they are correct or incorrect). A person who uses utilitarianism as an excuse will say: "I provide software that does what it is supposed to most of the time; this is enough because people are able to do their work." To this I reply: a software developer who does not inform their users that the software has not been properly tested is setting them up to be exploited. This is harmful. Instead of providing software with vulnerabilities to users and saying nothing about it, you should fulfill the Golden Rule and tell them that there may be bugs. If you honestly don't want to know that there may be vulnerabilities in the software you use, you are making a terrible mistake. People who make terrible mistakes in regard to software should not be software developers.

So now we get to the point of how we are able to define bad actors and bad software. A person who does harm consistently marks himself or herself as a bad actor. It is our job to name them and reduce harm. Bad software has a consistent history of vulnerabilities found and no evidence of thorough security testing being done. You can say that a developer has "given up" on the software if more vulnerabilities are found consistently throughout the life of the product. The development life cycle of a piece of software has a phase where only maintenance is ever done. During this phase, no vulnerabilities should be added and bugs should be fixed. If a project remains in heavy development for a long time (which is true of many valuable pieces of software), all new features must be tested along with all old features. This is a huge cost of development because it means that a tester must be writing new tools to test the new features as they come out. Those developers that choose not to do this testing are walking the line of the bad actor -- setting their users up for exploitation.

What many software security professionals forget is that software vulnerabilities go away when fixed. After a certain amount of testing there are no more bugs to be found -- ever. The rule that software developers write 3 bugs per 1000 lines of code means that there are not an infinite number of bugs in code. A well-written project can find all the bugs and no one will ever find another. Projects that are small and simple enough can be vetted and never tested again. This gives us hope of running software without backdoors, without vulnerabilities, and without worry. Do we want to live in a world without vulnerabilities? Do we want to spend the necessary time testing software to live in that world?
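The arithmetic behind this claim is simple. Using the 3-bugs-per-1000-lines figure as stated above (a rule of thumb, not a law), the bug supply of a codebase is finite and roughly estimable:

```python
# Rule of thumb cited in the text: ~3 bugs per 1000 lines of code.
BUGS_PER_KLOC = 3

def estimated_bugs(lines_of_code, bugs_fixed):
    """Rough count of bugs likely remaining, floored at zero."""
    return max(0, round(lines_of_code / 1000 * BUGS_PER_KLOC) - bugs_fixed)

# Illustrative numbers: a 50,000-line daemon that has fixed 120 bugs
# has on the order of 30 left to find -- a finite amount of work.
remaining = estimated_bugs(50_000, 120)
```

The point is not the exact number but that it is a number: after enough testing a small, stable codebase can genuinely run out of bugs.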

The list of software to replace

The list of software to fix

GLSAs  Package Name
34     firefox
31     wireshark
21     chromium
14     libpng
13     kdelibs
12     libxml2
11     freetype
11     glibc
11     python
10     gnutls
10     tiff
10     perl
10     sudo
9      poppler
9      lighttpd
9      gallery
9      file
9      kdegraphics
9      postgresql
9      gnupg
8      tor
8      vlc
8      v8
7      rsync
?      irssi
5      gimp
?      johntheripper
?      screen
?      libjpeg-turbo
?      cairo
3      gtk+
?      librsvg
5      graphicsmagick
2      inkscape
?      cgit
3      links
?      gcc
?      mesa radeon/ati/intel
?      nouveau
3      git

Steps in the right direction

I know that many people are put off by my method of declaring projects unmaintained, declaring them untested, and naming bad actors. For those who would rather work on fixing systems than on choosing which to replace, I recommend looking at projects that can improve the effectiveness of testing and patching. For example, if a system existed where I could give a maintainer a backtrace instead of a full description of the issue I found and its implications, that would save me hours per bug I find. The less hassle I have to deal with, the more bugs I am likely to find. This makes it possible for everyone to be more effective. The only solution that I have found is to create simple open source testing tools and give them to the maintainer, telling them that bugs will be found by running the tool. This way there needs to be no discussion about the bug, just the knowledge that a bug exists and that the tool finds it. This only works for respected security researchers, of course. An anonymous person would not be able to convince the maintainer of a software project to run a complex piece of software in the hope of finding a bug. Another thing that would help would be a list of projects that have been tested by an effective tool. These things need not be connected to my philosophy of calling out bad actors in the software world and blacklisting their software.

Projects often come up that give us the ability to improve our methods. These projects are a good start toward a secure environment.

The first group of projects is bug bounties. Bug bounties allow hackers to earn a small amount of money for their efforts. While this isn't enough to pay a person a high salary for the work they are doing (they could make two or three times as much in the security industry doing easy work), it is more than nothing. There is a lot to be said for more than nothing. Nothing is the amount of money I've been paid in the past year. Nothing is approximately how much a person makes when they do good research. More than nothing is how much a person makes when they send spam. Any motivation, we see, is enough for some people, just not everyone.

The second group of projects is fuzzers. These projects test the security and quality of software against malformed data. This allows security testers to learn more about a project by spending their valuable time understanding it rather than crafting malformed inputs by hand.
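A minimal mutation fuzzer of the kind described here fits in a few lines. The parser below is a toy stand-in for a real packet parser (it is not IPsec-tools code); the point is that clean rejections of malformed input are expected, and anything else is a finding worth handing to the maintainer.

```python
import random

def parse_length_prefixed(data):
    """Toy parser: one length byte, then that many payload bytes.
    Raises ValueError on malformed input."""
    if not data:
        raise ValueError("empty")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated payload")
    return data[1:1 + n]

def mutate(seed, rng):
    """Corrupt a few random bytes of a known-good input."""
    buf = bytearray(seed)
    for _ in range(rng.randint(1, 3)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(seed, rounds=1000, rng_seed=0):
    """Feed mutated inputs to the parser and collect anything that is
    neither a clean parse nor a clean ValueError rejection."""
    rng = random.Random(rng_seed)
    unexpected = []
    for _ in range(rounds):
        data = mutate(seed, rng)
        try:
            parse_length_prefixed(data)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:  # crash-equivalent: a bug to report
            unexpected.append((data, exc))
    return unexpected

findings = fuzz(b"\x05hello")
```

Real fuzzers like secfuzz add protocol awareness and run against the live daemon, but the loop is the same: mutate, deliver, watch for anything other than a clean rejection.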

The third group of projects is logical static analyzers. These projects are able to test the security and quality of software against its logic. This allows security testers to learn more about a project by spending their valuable time understanding it rather than reading every line by hand. I have left out closed source static analyzers because they are difficult to get ahold of.

The last group of projects is based on teaching people how to pick good software. Full Disclosure allows users to send advisories to other people and makes them available to the public. Open Source Security allows users to discuss the security of the open source software they use, and it allows anyone to request a CVE. CVE is a project to give a number to each vulnerability as it becomes known. While it doesn't actually record every vulnerability, the Open Source Security list gives everyone a chance to let others know that they tried to get a CVE.

Conclusion

If security were a trivial matter, we wouldn't need to think very big about it. We would batten down the hatches and weather the storm. Instead, we're seeing thousands of medium and high severity vulnerabilities each year. I produce 1-20 each year and I'm only spending a few hours here and there.

We sit on top of a wealth of knowledge. Our inaction causes us harm. Our action causes us harm. Don't worry, I have a plan. Instead of reactive security, we should reward those software projects whose proactive security efforts find and fix bugs. If a bug is found in a piece of software with many eyes on it, ask why. We can't just pat them on the back every time they add a bug and then fix it. We want them to release the tools by which they tested their software. If there are no tools, there is no testing. If they haven't fuzzed, why in the world not? If they haven't run their software with Valgrind, why not? If they haven't run static analysis tools on it, why not? If the static analysis tool produced a ton of false positives, who is staking their reputation on that being true? Who is validating the results of the static analysis tool they use?

Spinning Logo

For those who would like to publicize the cause of getting people to stop using IPsec-tools, please use the logos here or create a new logo.

I am Javantea and I do a lot of hacking, but my real passion is Nanotechnology. I've been studying for over a year and I am looking for peers. I am planning on taking the month of June off to do radio astronomy, so if I don't respond to your e-mails be patient.

Thanks to Ann, my friends, Neg9, Melody, Batman's Kitchen, the organizers of TA3M. To those who think critically and speak out: Keep it up!

Works Cited

[Speigel] Spiegel Staff. "Prying Eyes: Inside the NSA's War on Internet Security". Der Spiegel. URL: http://www.spiegel.de/international/germany/inside-the-nsa-s-war-on-internet-security-a-1010361.html

[Bellovin] Bellovin, Steven M. IPsec Key Management. URL: https://www.cs.columbia.edu/~smb/classes/s09/l13.pdf

[CG] CG. "Aggressive Mode VPN -- IKE-Scan, PSK-Crack, and Cain". Carnal0wnage & Attack Research Blog. December 7, 2011. URL: http://carnal0wnage.attackresearch.com/2011/12/aggressive-mode-vpn-ike-scan-psk-crack.html

[Tsankov] Tsankov, Petar. secfuzz IKE fuzzer. URL: https://github.com/ptsankov/secfuzz/

[Piper] Piper, D. The Internet IP Security Domain of Interpretation for ISAKMP. RFC 2407. 1998. URL: https://tools.ietf.org/html/rfc2407

[Maughan] Maughan, D., et al. Internet Security Association and Key Management Protocol (ISAKMP). RFC 2408. 1998. URL: https://tools.ietf.org/html/rfc2408

[Schneier-1999] Ferguson, Niels and Schneier, Bruce. A Cryptographic Evaluation of IPsec. 1999. URL: https://www.schneier.com/paper-ipsec.pdf

[Schneier-2007a] Schneier, Bruce. Schneier: Full Disclosure of Security Vulnerabilities a 'Damned Good Idea'. CSO Online. URL: https://www.schneier.com/essays/archives/2007/01/schneier_full_disclo.html

[Schneier-2007b] Schneier, Bruce. "A Security Market for Lemons". Wired. URL: https://www.schneier.com/blog/archives/2007/04/a_security_mark.html

[ASan] Address Sanitizer. Address Sanitizer. URL: https://code.google.com/p/address-sanitizer/

[Beale] Beale, Jay. Owning the Users. 2008. URL: https://www.defcon.org/images/defcon-16/dc16-presentations/defcon-16-beale-2.pdf

[Ferran] Ferran, Lee and Radia, Kirit. "Edward Snowden: U.S., Israel 'Co-Wrote' Cyber Super Weapon Stuxnet". ABC News. July 9, 2013. URL: http://abcnews.go.com/blogs/headlines/2013/07/edward-snowden-u-s-israel-co-wrote-cyber-super-weapon-stuxnet/

