His 8am keynote was much better attended than speaker Gary McGraw expected. In fact, given the evening festivities that normally occur at conferences, he wouldn't have shown up for a talk that early—except that he was giving it, he said with a laugh. But the keynote was well worth it for those who did arise early; it was a highly animated, somewhat scattered—though often amusing—look at some of the history of software security, problems within the industry, and some initiatives that may help produce better code in the future. He gave his talk on September 19 at AppSec USA in Denver.

McGraw is an author and security researcher. He started out as "a software guy", not really "a security guy", but got into security "because software sucks". He studied philosophy in college, then got a PhD in cognitive science under Douglas Hofstadter in Indiana. He is now the CTO of the software-security consulting firm Cigital.

Some background

When Java was released, it was claimed to be a "secure" language. He and Edward Felten started thinking about what that meant—and whether it was true. That led to the 1996 book they co-authored: Java Security. Java's security was broken back in that time frame, McGraw said, and it has broken recently as well. The new problems look much like the problems that he and Felten identified.

"We need to do a better job", he said. To that end, he wrote Building Secure Software in 2000. It came about because he started thinking about the problems with Java and how two smart minds, Guy Steele and Bill Joy, got tripped up. He wondered where lesser minds could go to learn about secure programming. The "answer was 'nowhere'", so "we wrote the book".

Going way back, computers were kept in a separate room and programmers had to "genuflect to get access" to them. In those days, "computers were expensive and people were cheap", he said; "my, how things have changed". It is the opposite now: computers are cheap and people are expensive.

Much of the software in use today was created by and for "geeks". But now all that software is "used by normals, not geeks". The normals don't care about security, the cloud, mobile vs. desktop, etc.; they just want to be able to "get their stuff done", which includes things like shopping, banking, and Facebook, McGraw said.

In his view, there are too many system and network administrators and too few "software people" involved in security. He suggested that anyone who wants to work in security "learn to code". There is a trend toward talking about "application security", rather than software security; he is not a fan of the former term. It was coined by someone walking up the layers of the OSI networking model until they found one that sounded good, he said. It is a network-centric, rather than a software-centric, view of security.

In the early 2000s, network administrators were convinced they had secure networks, except for those pesky users—especially "users with compilers". So, those administrators referred any security problems to the developers. The developers hated the network administrators who were tasked with security, though, because their requirements got in the way of developing code. What's needed is someone in the middle, McGraw said.

There need to be "people whose job it is to do software security". Right now, "nobody" gets blamed when security goes awry, because no one is tasked with doing it. "Who gets the credit when things go right?", he asked. But, "nobody", as suggested by the audience, was not the answer he was looking for on that question: "the CEO, of course", he said with a chuckle, "it's capitalism 101". More seriously, though, "there should be somebody that gets fired when software security goes badly". That team should have a "large budget and lots of people".

The bug parade

One of the issues for the software-security industry is "the bug parade", McGraw said. There are two types of security problems: bugs in the implementation and flaws in the design. It is "way easier" to deal with bugs than to handle the design flaws, so the industry tends to pretend that "it is all bugs". It also pretends that static analysis tools will find all of the problems. McGraw wrote one of the first static analysis tools, and now there are "lots of good ones"; he recommended using those tools, but "they won't solve the problem".

The number one bug, which accounted for more than half of the computer emergency response team (CERT) alerts in the 1990s, is the buffer overflow. He had a code snippet in his slides [PPT]:

char x[12];
x[12] = '\0';

He asked what was wrong with that code and noted that the main problem is that it is hard to explain to someone that the twelfth element is actually indexed using 11. "How do you teach kids to count?", he asked. The decision to start indexing from zero was made for efficiency reasons; "C is left over from the days where computers were expensive and people were cheap".

Even "the bible" (the K&R C book) shows an example of "how not to get input", McGraw said. It has the equivalent of the following code:

void main()
{
    char buf[1024];
    gets(buf);
}

That code helpfully puts the buffer on the stack and doesn't mention that an attacker can provide more than 1024 bytes of input. It is a recipe for a buffer overflow.

He then went into a bit of a language rant: "C sucks, C is bad, but C++ is worse". C++ is a "pile of stinking poo". There are hundreds of things you can do wrong in C, but C++ goes way beyond that: "don't use it, if possible". Interestingly (or, perhaps, tellingly), he did not really suggest a particular language to use, though C, C++, Java, and others were scorned throughout.

He then asked the Java programmers in the audience if they knew about "re-entrant code". Evidently not liking the response, he went into a short description of race conditions. Those are a big problem today and will only get worse with more (and larger) multi-core systems. They are "way more important than all the stupid web bugs", he said.

He also pointed to a long list of Java language bugs that were fixed 1996–2000, "but they're back". The Java sandbox has proved not to be the security barrier that its designers envisioned. In addition, trying to do static analysis on languages with dynamic binding is "ugly".

In general, we have a problem with dynamic languages and "we haven't even thought it through yet", McGraw said. JavaScript code is a moving target that resists analysis. Some languages, such as Clojure, are "doing it right", but others, such as Ruby, are doing it wrong.

He then turned to two of the biggest web application bugs: SQL injection and cross-site scripting (XSS). Both appear near the top of the OWASP Top 10—produced periodically by the Open Web Application Security Project (OWASP), which also organizes AppSec—but lists like that need to be applied sensibly. He told a (possibly apocryphal) story of an analyst who confidently told the customer that their code was safe from the #1 bug in the OWASP Top 10 (SQL injection); the customer replied that there was no database being used.

XSS is an example of a problem where rethinking is required. Anyone can find XSS flaws with various tools and fix them one at a time, he said. But Google rewrote the API for its web applications so that developers essentially can't do the wrong thing. XSS flaws have dropped to zero and have stayed there since that change.

The key is to "find the easy bugs now" and to fix them. It's a waste of time to argue about how to find them (i.e. different styles of analysis tools), rather than spending the time to fix them. "We have got to fix what we find", McGraw said.

The division between implementation bugs and design flaws is roughly 50/50 (though he liked the answer he once got from an audience member: 70% bugs and 70% flaws, which is the ratio he adopted for the rest of the talk). We are finding more bugs today, which is a good thing, but it is because we are looking for them. We are "standing under the light" finding the bugs there, but ignoring the "darkness over there", which is where the design problems live. If we are going to solve the security problem, we are going to have to start "talking about the other half of the problem".

To that end, McGraw has been working with other security researchers as part of the IEEE Center for Secure Design. Representatives from multiple companies (Google, Twitter, Intel, McAfee, ...) brought various design flaws they had faced to the group. From those, they came up with "Avoiding the Top 10 Security Flaws", which is "very high-level advice" designed for architects. It is also the first IEEE document released under a Creative Commons license (CC BY-SA), he said. "Yes, I made IEEE put something out under CC".

Software security zombies

McGraw then presented some of his "zombies". Normally zombies are bad, but "in my case, zombies are good". His zombies "eat your brain and live forever". They are a collection of "obvious ideas" that need to be spread more widely.

The software-security industry is growing at 20% annually, which is twice as fast as the (much larger) computer-security industry. But some industries are just getting started: retail, for example, he said with a chuckle. Unfortunately, retail is "doing it wrong" by hiring people from the government, which is five years behind the rest of the industry. He implored those present to help spread the ideas: "it is up to us to repeat the obvious to people who don't know it yet".

The first of his zombies is that "old school security" is reactive, which does not work. The idea behind a firewall (i.e. to put "something between the bad and good") requires a perimeter, which no longer exists in today's networks. The "penetrate and patch" strategy is flawed. Waiting until a product is "finished" to test it for security problems is way too late.

There is also too much weight placed on penetration testing ("pentesting"), he said, and repeated it three times. Pentesting is important and should be done, but "if that's all you do, you are an idiot". He noted that the standard approach is to hire a pentesting firm, which then only reports three of the five bugs it found. From those reported, one bug is fixed, one is partially fixed, and the other is ignored—then "declare victory".

Over-reliance on security functions like cryptography is another problem. "We use SSL" does not mean there are no problems with the security of the system. There is no "magic crypto fairy dust" that can be sprinkled on the product as the last thing. Security is a property, like quality or reliability, that has to be built into the product.

Another zombie is "the more code you have, the more bugs you have". Companies are producing more code, which means they will have more bugs. Even though the defect density is dropping, "which is fantastic", the rate at which it is dropping is not keeping up with the rate at which new code is being created.

Integrating security into the software development life cycle (SDLC) is another zombie. First off, anyone who thinks they work somewhere with one SDLC works "somewhere with more dogs than people", McGraw said. That is why it is important for any recommendations to work with all SDLCs. He is evidently not a fan of agile methodologies, calling them "extremely bad programming, fast!", but it is a waste of time to argue about development methodologies, he said. Instead, ensure that security best practices can be applied to whatever is being used.

There is no "badness-ometer" for security; you cannot test something into being secure. Management would love some kind of meter that would indicate that a product is secure, but it doesn't exist. Given that the halting problem means we cannot even determine if a particular program will stop, there is no hope for the badness-ometer, he said.

The final "bonus zombie" is to "fix the dang software". Security people should be fixing the problems that they find. If your division is only in the business of finding problems, "everyone will want to eliminate that organization", he said. Throwing rocks is easy, but security organizations also need to help fix the broken code.

BSIMM

McGraw wrapped up his talk with a brief description of the "Building Security In Maturity Model" (BSIMM), which is a study of software-security initiatives at multiple companies. The idea is to gather data from these companies and to build a model to describe the data. The fifth iteration of the study involved some 67 companies, which had roughly 3000 security people trying to control the work of 272,000 developers. Those companies "may all be doing it wrong", he said, but they are "doing it the same way".

The idea is to be descriptive of what was observed at the companies. The team "went into different jungles", where they "observed monkeys eating bananas". "Are bananas good?", he asked. The answer is "we don't know", but it was observed in 67 jungles. The average size of a software security group is 1.4% of the total number of developers. Is that the right number? Again, no one knows, but it is what 67 companies do.

If you look at the companies involved in the study and "don't want to do it like them", then BSIMM will not be helpful, he said. But it does provide measures of various security-related things that other companies can use to judge their own practices. In order to improve, there must be some kind of measurement. BSIMM is one measuring stick that may be helpful. For more information, he recommended an article he wrote for SearchSecurity.

In passing, he noted that there were no government contractors or agencies involved in BSIMM. That's because the government is five years behind the industry. It is "good on offense", he said, but not on defense in the software-security realm.

Those interested in finding out more about all of the topics he discussed should check out his monthly column at SearchSecurity, as well as his Silver Bullet podcast. Also, the software security book series contains three separate books, including his Software Security, which provide lots of useful information. He suggested that people should read his book, even if they get it via BitTorrent. "I don't care", he said, even if everyone in the room bought it, "I get like $6", he said with a laugh.

McGraw's closing statement was that there are "big issues to grapple with" in software security. If we are going to succeed, we "need to do it together". His efforts with entities like the Center for Secure Design and the BSIMM are evidence that he is practicing what he is preaching. The bigger question may be whether developers and companies are listening to his sermons.


At the 2014 ATypI conference in Barcelona, a pair of talks described the recent joint effort by Google and Adobe to build what amounts to the largest font project ever undertaken. The font in question is free software, available under the Apache 2 license. Its branding varies between the companies—Google's version is known as Noto Sans CJK, while Adobe's is Source Han Sans—but, in either case, the font is the first open-source "pan CJK" (Chinese, Japanese, Korean) typeface. The character set it implements maxes out the possible size of an OpenType file's internal tables, with 65,535 characters in each file. Including all of the weights and variants, the project developed nearly 500,000 characters—an effort that pushed the limits of the design process and of the font-building process alike.

On the first day of the event, Caleb Belohlavek from Adobe and Stuart Gill from Google presented the completed font itself and discussed some of the technical challenges involved in the project. On the fourth day, Masataka Hattori, Ryoko Nishizuka, and Taro Yamamoto spoke about the design and publication process.

Abundant characters

Noto/Source Han covers all of the symbols needed to write the four major languages that commonly use Chinese "Han" characters: Traditional Chinese (written in Hong Kong, Taiwan, and Macau), Simplified Chinese (written in mainland China and Singapore), Japanese, and Korean. Simplified Chinese, as the name suggests, uses the same characters as Traditional Chinese, but incorporates numerous intentional reductions in stroke count and other structural simplifications. Altogether, written Chinese contains tens of thousands of characters; there is not even full agreement on the exact number. A fair number of them are unique to proper names, and many are homophones—but that fact does not eliminate the need to support more than one of the characters; Belohlavek and Gill likened it to the difference between "Smith" and "Smythe."

Japanese and Korean each have their own scripts, of course (Kana and Hangul), but they regularly intermix Chinese glyphs as well. The Japanese and Korean Han variants, however, were first adopted from Chinese centuries ago, and today they differ in important ways—both from each other and from Chinese. The structures of the forms vary from language to language, but so do many stylistic details, such as how individual strokes are terminated or joined together. The Kana and Hangul character sets are fairly small, but the need to support Chinese characters as well makes any CJK font a complicated beast.

In practice, the sheer size of the CJK character set means that most publishers, online and off, are forced to mix-and-match multiple fonts in a document—particularly when there is a need for multiple sizes of text or varying levels of bold for emphasis. There have been precious few pan-CJK fonts ever made that could provide the full character set. In its coverage of the Noto/Source Han release, CNET found only three others, all with hefty (four-figure) price tags attached.

In 2009, this situation struck Adobe's Ken Lunde as one needing immediate attention, and he raised the issue at the 2009 Unicode Conference. Lunde's comments were noticed by the team at Google working on that company's Noto font "superfamily." Noto is Google's effort to create a high-quality typeface that covers the entire Unicode standard. The design (at least of the Latin characters) is visually similar to the Droid font developed for Android and to Open Sans, which Google uses in its branding and in several web applications.

After Lunde's talk, Google approached Adobe with the idea of commissioning a pan-CJK typeface that would harmonize with Noto and would also be usable with Adobe's Source Pro family. As the speakers put it, "then two lawyers entered a room together and two years went by." Finally, the legal details were sorted out, and in 2012 the project began in earnest.

Collaborative development

Since the goal was to develop a font that appealed to local users, Google and Adobe decided to commission three type foundries with expertise in the local languages. At ATypI 2012 in Hong Kong, the companies met with Changzhou SinoType (from China), Iwata Corporation (from Japan), and Sandoll Communication (from Korea), and began hashing out a development plan.

Early on, it became clear that developing the font would push the boundaries of existing formats. Using the TrueType format, the team estimated, generating hinting for the fonts would, by itself, require two years. That left the Compact Font Format (CFF), a PostScript-derived format, as the only real choice—since it relies on the rendering engine to perform pixel-grid alignment, rather than requiring hints to be embedded for each individual glyph. That realization, Belohlavek said, was one of the major contributing factors in Adobe's eventual decision to donate its CFF rendering engine to the FreeType project. Without the Adobe CFF renderer, he said, the Noto/Source Han font family would not have been viable.

The next hurdle was the number of characters that can be included in any single file. OpenType, which can be a wrapper around either TrueType or CFF glyphs, supports only 65,535 glyphs in a single file. Unicode 1.0 introduced the concept of Han unification, which was intended to map the CJK characters from all four languages into a single character set. But, as mentioned earlier, the actual characters are often not drawn identically in the four languages.

Thus, an extensive undertaking was required to sort out which of the tens of thousands of "unified" glyphs could be reused in more than one of the languages, then to map those reuse relationships into a set of OpenType locl (for "locale") substitution tables. Ultimately, however, even that was not enough to squeeze under the 65,535-character limit, Gill and Belohlavek explained, "so you must get choosy; eventually it all comes down to opinions." A variety of the trade-offs that were involved, from working with Unicode Variation Sequences to which code points are mapped for which language, are described in the release notes.

Similarly, when the team set out to decide on a character encoding for each language, there were (as there are for most languages) many to choose from. It eventually chose to offer multiple character encodings for each language, hoping to be as widely useful as possible. It is not possible to please everyone, of course, as the speakers found out. During the Q&A session, one audience member took issue with the choice of Taiwan's Ministry of Education (MOE) encoding; eventually the speakers conceded that there were trade-offs with any such choice and said they relied on the regional expertise of the various type foundries to make the right decision.

From design to deployment

Yamamoto, Hattori, and Nishizuka spoke in more detail about the design process in their session. Nishizuka was the principal designer on the project; Yamamoto heads Adobe's type team in Tokyo, and Hattori worked both on type design and production issues. The overall workflow involved Nishizuka developing a set of designs for Japanese, which were then sent to Sandoll to be adapted for Korean, and from there were sent to SinoType for the Traditional and Simplified Chinese work. Drawing tens of thousands of characters even once (much less in multiple weights) is a daunting proposition. Nishizuka said she started by building a library of around 120 reusable stroke components, which were then used to build "a few hundred" core characters to be circulated to the other foundries.

Over many iterations, that process was used to build up the completed character set. But the sheer scale involved, it seems, made everything difficult. Nishizuka noted that removing the serif-like stroke endings, although a small change, radically reduced the file size when multiplied by 60,000 characters. Among the other challenges the designers cited was accounting for differences between the way the languages are used today: Japanese documents, for example, are increasingly required to mix vertical and horizontal writing. It was also not easy to develop a style that fit the comparatively open Japanese Kana characters, the compound Chinese characters, and the rather geometric forms of Korean Hangul.

Altogether, the development process took two years. Along the way, the team even found previously undiscovered bugs in the various encoding standards—such as a reversed component in the Unicode charts and mistakes in Taiwan's MOE standard. Lunde has written detailed blog posts about several of these issues, which make for highly educational reading about the dangers of extremely large specifications.

The first public release was made on July 15, 2014, and included the full character set in seven different weights. In addition to source, installable packages were built in a variety of formats: the full character set, a monolingual version for each language, region-specific versions (which are essentially a workaround to cope with the fact that not all software supports OpenType locl features), and sets that combine multiple .OTF fonts into OpenType Collection (.OTC) files—a rarely used file format that saves a bit of space when packaging multiple fonts together by allowing the fonts to share a common set of feature tables. The file sizes range from 15MB to well over 100MB, depending on the version selected. Even in the packaging, it seems, the Noto/Source Han project is pushing the limits.

The font has already been spotted in the wild, and the speakers noted that enough feedback had come in from users that an update was released in September. Users who can get by with text written in European languages may regard the Noto/Source Han project as impressive largely for the scope of the engineering effort that it required. In fact, Belohlavek and Gill displayed a door-sized poster showing the entire 65,535-character set during their talk, which proved to be quite a popular curiosity; even at that size, individual characters were all but unreadably small. But for the millions of users who write one of the four CJK languages, the availability of a high-quality font family as free software is undoubtedly a win.


It's been a crazy week for the Bash shell, its maintainer, and many Linux distributions that use the shell. A remote code-execution vulnerability that was reported on September 24 has now morphed into multiple related vulnerabilities, which have now mostly been fixed and updates released by distributions. The vulnerabilities have been dubbed "Shellshock" and the technical (and mainstream) press has had a field day reporting on the incident. It all revolves around a somewhat dubious Bash feature, but the widespread use of Bash in places where it may not really make sense contributed to the severity of the bug.

First bug

The bug, which was evidently introduced into Bash around 1992, (ab)uses the environment-variable parsing code by adding extra commands after defining a function. It was not really common knowledge that you could even define a function by way of an environment variable. It is a little-used feature that was also, evidently, not strenuously tested. An attacker who could set an environment variable could do:

VAR=() { ignored; }; /bin/id

The result is a shell function named VAR, but that's not the most dangerous part. Because of a bug in the parser, Bash didn't stop processing the variable once the function definition was complete, so it would execute /bin/id every time the variable was parsed, which happens at Bash startup time.
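The feature being abused is a real one that still exists in patched shells: Bash can export a function to its children, which reconstruct it from the environment. A minimal sketch of the legitimate behavior (the function name greet is just an illustration):

```shell
#!/bin/bash
# Define a function and export it into the environment; a child
# bash finds it there and can call it as if it were defined locally.
greet() { echo "hello from an exported function"; }
export -f greet
bash -c greet
```

Shellshock arose because the parser that rebuilds such functions in the child kept executing whatever followed the function body.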

Normally, just on general principles, one avoids giving attackers ways to set environment variables in shells. But, as it turns out, there are a number of ways for an attacker to do so. The easiest (and perhaps best-known) way is to use the Common Gateway Interface (CGI) protocol. All CGI programs get invoked with certain environment variables set whose values are controlled by the client (e.g. REMOTE_HOST, SERVER_PROTOCOL, etc.). If Bash is invoked, either because the CGI program is a Bash script or through some other means, it will parse the environment variables and execute any code the attacker tacked on. Game over.

There may not be all that many Bash-specific CGI programs in the wild, but many Linux distributions (notably not Debian or Ubuntu) make exploiting the bug easier still: they link /bin/sh to Bash. So, any /bin/sh CGI scripts (of which there still probably aren't all that many) are vulnerable. More worryingly, CGI programs in any language that use the shell (e.g. via system(), popen(), or similar) may well be vulnerable.
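The CGI exposure is easy to simulate from a command line; here HTTP_USER_AGENT stands in for any client-controlled CGI variable. This is a sketch of post-fix behavior: on a patched Bash the payload stays an inert string, while an unpatched 2014-era Bash would have run the trailing echo.

```shell
#!/bin/bash
# A client-supplied value lands in the environment of a bash child,
# just as a web server would pass an HTTP header to a CGI script.
# A patched bash treats the value as an ordinary string and runs
# only the requested command.
HTTP_USER_AGENT='() { :; }; echo vulnerable' \
    bash -c 'echo handling request'
```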

Beyond CGI programs, there are a number of other possible vectors for attack. DHCP clients often invoke shell scripts for configuring the network and use environment variables to communicate to those scripts. Mail transfer agents (MTAs) may be affected; Exim and qmail are both vulnerable. Restricted OpenSSH logins (using the ForceCommand directive) can bypass the restrictions using Shellshock. And so on. Red Hat has compiled a list of some of the affected (and unaffected) programs.

Fixes

Michał Zalewski has a blog post from September 25 that describes the original bug along with the first patch. That patch worried Zalewski and others since it simply stopped the parsing once the function definition had been parsed; it would no longer execute code placed after the definition. But, as he pointed out, it still allowed an attacker to send a HTTP header like:

Cookie: () { echo "Hello world"; }

The CGI program would then get an environment that contained a function called HTTP_COOKIE(). It is unlikely that such a function could be called by accident, "but intuitively it's a pretty scary outcome".

As Zalewski described, that first patch was fragile because it made two assumptions about the Bash parser—both of which were later shown to be incorrect, as updates sprinkled throughout the post attest. He advocated Florian Weimer's approach, which puts the functions defined in environment variables into a different namespace by adding a prefix and suffix to the names. That should avoid allowing environment variables to be unknowingly set to functions by web servers and other programs.

Weimer's patch was eventually merged with a few tweaks (e.g. changing the suffix from "()" to "%%") by Bash maintainer Chet Ramey. So there is now an easy test to determine if a system is susceptible to the bugs:

$ foo='() { echo not patched; }' bash -c foo
bash: foo: command not found

If the output shows "not patched", rather than the above, Bash is still vulnerable. Zalewski's post from September 27 describes some of the additional parser bugs found that led Ramey to adopt the namespace approach.
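The effect of the namespace fix is visible in the environment of any Bash that carries it; this sketch assumes a current Linux build, where the mangled name takes the form BASH_FUNC_name%%:

```shell
#!/bin/bash
# Export a function and inspect how a patched bash encodes it: the
# variable name is wrapped as BASH_FUNC_f%%, so a plain variable
# such as HTTP_COOKIE can never be mistaken for a function.
f() { :; }
export -f f
env | grep '^BASH_FUNC_'
```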

Meanwhile, distributions have been a bit whipsawed: updating Bash, seeing more bug reports, and updating again. At this point, things have mostly settled down on that front. All that remains is for users to update their systems. Since both Debian and Ubuntu use the Debian Almquist shell (Debian ash or dash) for /bin/sh, there is likely far less risk of an exploit, though Bash should still be updated.

More changes may be coming. Christos Zoulas suggested that a flag be added to govern importing functions through environment variables with a default to "off". That is a change he has made for NetBSD's Bash: "It is not wise to expose bash's parser to the internet and then debug it live while being attacked." Others have agreed that it would provide a stronger defense against other, unknown flaws in the parser. Scripts that use the feature (which seem to be few in number) could be changed to turn the feature on.

It should be noted that attacks are ongoing in the wild. For example, the LWN web logs are full of attempts to exploit the vulnerabilities.

While there is plenty to worry about with regard to Shellshock, some in the press have gone a little overboard. It is unlikely, for instance, that vast numbers of embedded Linux devices are vulnerable, medical-related or otherwise. The problem of embedded Linux devices that can't be updated is certainly real (and likely to bite us at some point), but Bash is not typically installed in the embedded world. Most are likely to be using BusyBox, which uses ash, so it is not vulnerable. Another large chunk of Linux devices, Android systems, use mksh (MirBSD Korn shell) rather than Bash.

Since Bash is a heavyweight shell, with a long startup time and high memory requirements, one might wonder why many Linux distributions make it the default shell for shell scripts. Debian and Ubuntu moved away from Bash for shell scripts for just those reasons. Slackware uses ash for its install scripts as well. These bugs may lead to a push to switch to a more minimal shell as the default for scripts in more distributions. This is likely not to be the last Bash vulnerability we see—especially now that security researchers (and attackers) are focused on it.

The "function definition via environment variable" feature seems to be of limited utility. Also, since it isn't all that well-known, it has largely escaped scrutiny until recently. Weimer mentioned that the feature appears to be used by test harnesses. The search he did in Debian's code repositories bears that out. While it may be tempting to disable the feature, as Ian Jackson tried, the namespace fix is backward compatible, so existing users can continue to use it. Movement toward reducing or eliminating Bash for non-interactive uses throughout Debian (i.e. eliminating #!/bin/bash), though, seems to be picking up some steam.

As with OpenSSL and Heartbleed, Shellshock has exposed a project that is both critical to many Linux systems and not completely healthy. Diego Pettenò described the problem in a blog post. Bash has a single maintainer, with a somewhat cathedral-like development process, which led Pettenò and others to be concerned about the shell long before Shellshock. It would seem prudent for the Linux community to be on the lookout for these kinds of problems now that we have been bitten twice recently.

There are a number of programs that underlie a basic, functioning Linux system, but it is not entirely clear what that number is, nor what projects belong in that set. OpenSSL was an obvious member (though largely ignored until recently); Bash is less so, even though it is now clear that it is used in ways that can easily lead to system compromise. It is probably long past time that some kind of inventory of this "critical infrastructure" is done. Once the projects are identified, some kind of health assessment and/or security audit can be done. We can be sure that those kinds of assessments are being done, at least informally, by attackers and black hats—we just don't get the benefit of their analysis.
