At the heart of Apple's very serious charges against HTC -- among the most prominent manufacturers of Android-based phones today -- is whether Dalvik, the specialized derivative of the Java virtual machine that Android uses to run Java programs, actually borrowed (or stole) ideas directly from the NeXT operating system. NeXTSTEP, you may recall, included radical innovations in the system kernel enabling inter-application communication (IAC) on a level far beyond anything the Macintosh had at the time. It was Steve Jobs' revenge against the company that had spurned him, and as history has borne out, Jobs was the victor in that little skirmish.

One of the ten patents Apple is defending in its lawsuit against HTC, drafted yesterday and filed this morning in US District Court in Delaware, deals specifically with NeXT's methodology. Apple acquired NeXT at the end of 1996, which is how Jobs re-entered Apple's universe -- and, many believe, how the company was saved. Earlier that year, NeXT had received a patent on a framework for IAC designed to compete with COM/DCOM and CORBA, the other leading object methodologies of the time.

Though the complaint Apple filed with the US International Trade Commission this morning was officially under seal, it managed to get posted to Scribd this afternoon.

Apple's complaint describes the concept behind the first of its ten defended patents briefly: "As a general rule, separate processes execute independently (even when they may be executed simultaneously), and software within one process cannot directly access resources from or make calls on software within another process. Conventional methods for inter-process communication often required the software programmer to understand and utilize low-level operating system functionalities. The '721 patent describes a more efficient method for inter-process communication by way of a proxy object, which exists in a local process and acts as a local representation of objects that are located in a different process."
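The proxy idea in that passage can be sketched in a few lines. What follows is a minimal illustration in Python -- not Apple's or NeXT's actual code, and every name in it is invented: a Proxy object lives in the local process and marshals each method call over a pipe to the real object running in a second process.

```python
# Illustrative sketch (not the '721 patent's actual mechanism): a proxy
# object in the local process forwards method calls, over a pipe, to a
# real object living in a different process.
from multiprocessing import Process, Pipe

class Calculator:
    """The 'real' object, which lives in a separate process."""
    def add(self, a, b):
        return a + b

def serve(conn):
    """Run in the remote process: receive (method, args), send back results."""
    obj = Calculator()
    while True:
        msg = conn.recv()
        if msg is None:          # shutdown sentinel
            break
        method, args = msg
        conn.send(getattr(obj, method)(*args))

class Proxy:
    """Local stand-in: every method call is marshalled across the pipe."""
    def __init__(self, conn):
        self._conn = conn
    def __getattr__(self, name):
        def remote_call(*args):
            self._conn.send((name, args))
            return self._conn.recv()
        return remote_call

if __name__ == "__main__":
    parent, child = Pipe()
    worker = Process(target=serve, args=(child,))
    worker.start()
    calc = Proxy(parent)
    print(calc.add(2, 3))        # looks like a local call; runs remotely
    parent.send(None)            # tell the worker to shut down
    worker.join()
```

The caller treats `calc.add(2, 3)` as an ordinary local call; the proxy handles the low-level plumbing, which is essentially the convenience the '721 patent describes.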

The typical problem with objects, or object-oriented components, in an operating system is that they don't really speak each other's language -- or more accurately, that they don't have a common language to speak. In Microsoft's COM, still at the heart of Windows, that problem is resolved through the System Registry. Windows components all share a few common functions, whose sole purpose is to enable them to discover one another. Once they've done that, they can share information from the Registry to connect them with each other's interfaces, and from there, they acquire each other's vernacular of functions they can perform. I've often said it's not so much like two foreign objects learning a common language, as much as deciding what storage room they can meet in that contains enough other objects they can point to, so they can identify what it is they want for now.
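That discovery dance can be roughed out in code. The sketch below uses invented Python names, not the real Win32 API: every COM object implements one universal interface (IUnknown), whose QueryInterface method is the shared bootstrap through which callers ask for the interfaces they actually want.

```python
# Illustrative sketch of COM-style interface discovery (hypothetical
# names, not the real COM API): components share one bootstrap method,
# query_interface, and only advertise capabilities through it.
class IUnknown:
    def query_interface(self, iid):
        """Return this object if it supports interface `iid`, else None."""
        return self if iid in getattr(self, "SUPPORTED", ()) else None

class SpellChecker(IUnknown):
    SUPPORTED = ("IUnknown", "ISpellCheck")
    def check(self, word):
        return word == word.lower()

component = SpellChecker()
# Discovery first, vocabulary second: only after query_interface
# succeeds does the caller know it may invoke ISpellCheck methods.
spell = component.query_interface("ISpellCheck")
assert spell is not None
print(spell.check("dalvik"))   # True
```

The point of the pattern is that the two objects never need a shared binary layout up front; they need only agree on the one bootstrap call.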

NeXT handled this problem differently, replacing COM's mutual interface exchange with what many at the time felt to be a more elegant methodology. The kernel of the NeXT operating system (and of its successors, including the current Mac OS X) is called Mach. Instead of relying upon binary interfaces, Mach let developers write IAC messages in Objective-C -- a language that for many years has been Apple's ace in the hole. Mach handled all the translation into streams of primitives, shared with other objects using Internet-like communications protocols. Although objects still communicated with each other by way of proxy, as was the case with COM, the stream they set up with one another was a port; all the objects with rights to communicate on that port formed a domain.

From Apple's/NeXT's US Patent #5,481,721, which is central to the dispute: "The present invention provides a means for implementing an extensible, distributed program in which one task is responsible for creating other tasks to communicate with. This is a master/slave relationship; the master can provide the slave with send rights to the master's port as part of the creation process. When the slave starts executing, it sends a Mach message containing send rights to its port and a token for its first proxy back to the master. The master then replies with an indication of whether the connection is granted, and what token to use for the first proxy. This 'bootstrap-meta-protocol' results in both tasks knowing about each other, allowing communication to ensue."
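The handshake in that passage can be paraphrased in code. Here is an illustrative sketch in Python, with thread-safe queues standing in for Mach ports and threads standing in for tasks; every name is invented for illustration, and real Mach messaging is considerably richer.

```python
# Sketch of the '721 patent's "bootstrap-meta-protocol", with Python
# queues standing in for Mach ports (all names here are illustrative).
import queue
import threading

results = []

def slave(master_port):
    my_port = queue.Queue()
    # Step 1: the slave sends send-rights to its own port, plus a token
    # for its first proxy, back to the master.
    master_port.put({"port": my_port, "token": "proxy-0"})
    # Step 2: the slave waits for the master's reply on its own port.
    results.append(my_port.get())

master_port = queue.Queue()
t = threading.Thread(target=slave, args=(master_port,))
t.start()

# The master receives the slave's port and proposed token...
hello = master_port.get()
# ...and replies: connection granted, use this token for the first proxy.
hello["port"].put({"granted": True, "token": hello["token"]})
t.join()

print(results[0])   # {'granted': True, 'token': 'proxy-0'}
```

After the exchange, both sides hold a reference to the other's port, which is the "both tasks knowing about each other" state the patent describes.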

What the objects say to one another over this shared port is almost immaterial to the patent; it is the method through which the port is negotiated and managed on which Mach's technology, and its associated patent, are based.

The concepts you've just read about may become the critical points of debate over the next few years, assuming Apple's case against Android goes to trial, and also assuming (quite likely) that it will grow to involve manufacturers beyond HTC. It's one thing to be open, but you can't open somebody else's ideas.

But did Android do that? Even prior to today's case, there's actually been considerable debate among developers over why Android did not appear to take the Objective-C route.

Android's runtime interpreter is a kind of Java that isn't really Java. That is to say, Android's Dalvik runs programs written in Java, but compiled down to a different bytecode than a true Java virtual machine (VM) uses. The Android Developers' forum explains that the system does this for relative efficiency: "Every Android application runs in its own process, with its own instance of the Dalvik virtual machine. Dalvik has been written so that a device can run multiple VMs efficiently. The Dalvik VM executes files in the Dalvik Executable (.dex) format which is optimized for minimal memory footprint. The VM is register-based, and runs classes compiled by a Java language compiler that have been transformed into the .dex format by the included 'dx' tool."
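To see what "register-based" means in practice, here is a toy comparison in Python. The instruction names are invented, not real dex or JVM bytecodes; both fragments compute 2 + 3, one in the stack-machine style of a classic JVM, the other in the register-machine style Dalvik uses.

```python
# Toy illustration of stack-based versus register-based bytecode
# (instruction names are invented; real JVM and dex opcodes differ).

def run_stack(program):
    """Classic JVM style: operands are pushed and popped from a stack."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

def run_registers(program, nregs=4):
    """Dalvik style: instructions name their operand registers directly."""
    regs = [0] * nregs
    for op, *args in program:
        if op == "const":          # const dst, value
            regs[args[0]] = args[1]
        elif op == "add":          # add dst, src1, src2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
    return regs[0]

# Stack machine: values shuffle through an implicit stack.
print(run_stack([("push", 2), ("push", 3), ("add",)]))                      # 5
# Register machine: operands are named in the instruction itself.
print(run_registers([("const", 1, 2), ("const", 2, 3), ("add", 0, 1, 2)]))  # 5
```

The register form names its operands directly instead of shuffling them through a stack, which tends to mean fewer, denser instruction dispatches; that density is part of the rationale behind the .dex format.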

Certainly it may seem efficient from the point of view of a developer who simply wants to get his app built. But in terms of processing efficiency, there's considerable complaint that Dalvik is actually inefficient, evidence for which appears in Google's own benchmark results comparing Dalvik apps to Sun Java and compiled C.

While NeXT's patented IAC process adopts an elegant communications scheme, Dalvik approaches the problem of sharing via a more direct, if costly, route -- efficient in the sense that it doesn't appear to require a patent to explain it.

Seattle-based mobile software engineer Koushik Dutta cast a sharp spotlight on what he considered Android's inefficiencies, in a post for his personal blog in January 2009, comparing Dalvik to what he perceived as the much more efficient scheme adopted by Mono, the open source, cross-platform work-alike to Microsoft's .NET: "The two line summary is that Dalvik is designed to minimize memory usage by maximizing shared memory. The memory that Dalvik is sharing are the common framework dex files and application dex files (dex is the byte code the Dalvik interpreter runs). The first thing that bugged me about this design, is that sharing the code segments of dex files would be completely unnecessary if the applications were purely native. In Linux, the code segments of libraries are shared by all processes anyways. So, realistically, there is no benefit in doing this.

"If the battery is the primary constraint on the device, why is Dalvik so concerned with minimizing memory usage?" Dutta continued. "I am by no means a VM design guru...but I can say the following with certainty...Battery life is primarily affected by how much you tax the processor and the other hardware components of the device, especially the use of 3G/EDGE and Wi-Fi radios. Interpreting byte code will tax the processor and thus the battery much more than native/JIT [just-in-time] code...Modern (Dream/iPhone comparable) hardware running Windows Mobile is rarely memory constrained, and they don't have a fancy memory sharing runtime...If all applications can suspend and restore at the system's whim, then memory consumption is trivialized."

Dutta's explanation, in summary, contrasts the architecture of operating systems that adopt the principle of minimizing their memory footprints (Android) against those that take the more direct approach of suspending some apps so others can run (iPhone). Here's where it is important to note that Apple does not appear to be defending its iPhone, but rather technologies that are actually more relevant to Mac OS X.

Nevertheless, it may be the very inefficiencies Dutta pointed out that become Android's saving grace in its upcoming battle against Apple. If Android is indeed as inefficient as some say it is, it may not be violating anyone's patent at all.