Backgrounder: Remember the company we wrote about yesterday? The startup with the startling technology which, it claimed, could run database queries 60 or more times faster than other systems' storage and compute, and offer "limitless enterprise storage".

Well here's a Q&A that sheds a little more light on the tech.

The main relevant patent is Symbolic IO's 9,304,703, "Method and apparatus for dense hyper IO digital retention", which was assigned in July 2015 and can be accessed here.

Check that out before or after reading the Q-and-A text:

How do the RAM modules in the [IRIS] system compare with NV-DIMMs?

Our modules (an NVRAM derivative) are tightly integrated with IRIS to maintain non-volatile status and ensure they operate correctly and efficiently. Prior to Symbolic IO's ... software ... NV-DIMMs have typically been a seldom-used, off-the-shelf commodity element that has been relatively small in size (8-16GB). We are working closely with major semiconductor companies to ensure densities 8-10 times larger than currently available in the marketplace. The RAM modules serve only as high-speed compute, while the NV-DIMMs act as the store modules.

What functions does the SymCE operating system carry out and in what way is data changed when it enters the system?

SymCE is Symbolic IO's operating system. Within this core operating system, the "Conversion Engine" allows for ingestion of any type of data, whether that data is streamed, chunked, or delta blocks. Data can be ingested from a local memory channel or any wire such as USB, Direct Wire, TCP/IP packets, Fibre Channel, InfiniBand, iSCSI, etc. It also enables data to traverse one or several data paths singly or simultaneously.

A Symbolic IO patent application diagram

Regardless of the data path, data is ingested by the system, allowing the system to begin the I/O conversion process. The conversion process consists of primary data (D' [prime]) being dismantled into substrate components called "fractals" and processed into SbMs (Symbolic Bit Markers). Unlike other technologies, an advanced algorithm allows for substrate fluctuation.

One of the most compelling elements of Bit Marker technology is that it is lossless and does not require any additional overhead, unlike traditional compression schemes. The output of the conversion process is stored, transmitted, or both, depending on the use case. The entire conversion process requires no delayed post-processing and happens in real time for the end-user.

Symbolic IO refers to the ingestion conversion process as the "constitution" of data, whereby the data being stored or transported has been converted into this proprietary and unique language format and will remain in that state until a recall or read-back/read-request is received by the system. Constitution is a derivative of the data write function. These write functions can be associated with many fundamental write methods, such as file writes, object writes, and block writes; block writes, for example, use common SCSI commands such as read and write.

For real-time data retrieval, a compute assembly process is required for reading back SbMs; this process is called "reconstitution". Reconstitution commands the retrieval operation of SymCE to materially realign the true data to its original form, triggering the reconstitution of one or many SbMs back into material data, in the proper order.
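The actual marker algorithm is proprietary and patented, but the constitution/reconstitution cycle described above can be pictured as dictionary-style symbol substitution: fragment the data, map each distinct fragment to a short marker, and invert the mapping on read-back. Here is a minimal illustrative sketch in Python; the fixed-size fragment scheme and all names are our own assumptions, not Symbolic IO's implementation:

```python
# Toy "constitution"/"reconstitution" cycle: split data into fragments,
# replace each distinct fragment with a small integer marker, and invert
# the mapping on read. Illustrative only -- NOT Symbolic IO's algorithm.

def constitute(data: bytes, frag_size: int = 4):
    """Dismantle data into fixed-size fragments and map each distinct
    fragment to an integer marker (a stand-in for an SbM)."""
    table = {}      # fragment -> marker
    markers = []
    for i in range(0, len(data), frag_size):
        frag = data[i:i + frag_size]
        if frag not in table:
            table[frag] = len(table)
        markers.append(table[frag])
    # Invert the table so read-back can materialise the original fragments.
    inverse = {m: f for f, m in table.items()}
    return markers, inverse

def reconstitute(markers, inverse):
    """Reassemble the original byte stream, in order, from the markers."""
    return b"".join(inverse[m] for m in markers)

payload = b"ABCDABCDABCDXYZ!"             # repeated fragments map to one marker
markers, table = constitute(payload)
assert reconstitute(markers, table) == payload   # lossless round trip
```

Because repeated fragments share a single table entry, repetitive data shrinks; the round trip is exact, which is the "lossless" property claimed above.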

How does SymCE interact with other operating systems needed to run applications, such as Linux or Windows?

IRIS (Vault) and SymCE ship with an embedded optional hypervisor and, when running other operating systems, IRIS merely appears as a computational engine (SymCE) and storage device.

Symbolic IO's 2U IRIS hardware

Do applications like Oracle run on the Symbolic box, as opposed to requiring a separate server?

Yes. IRIS is its own server platform that does not require a middleman. MetTel is running its current email and database environments on top of the IRIS server.

In what way is data changed when it enters the system?

As soon as data is processed in the system it’s assigned Symbolic Bit Markers, which are the instruction sets that allow the media/data to be more elastic. It’s a seamless integration with existing platforms and operating systems.

If I had, say, 100 gigabytes of data, would that same information exist in the Symbolic system in a different size?

Yes, our Symbolic Bit Marker technology has a data reduction piece. The data is converted into marker technology, creating a new sub-state and securing all data, which significantly reduces the size of the data, allowing for more processing and storage space.

What is the purpose of changing the data format?

Brian Ignomirello's entire theory when founding the company was based on changing the format of data, and on the concept that all digital data could be co-generated by implementing an advanced algorithmic compute method to materialise and dematerialise data in real time, utilising a proprietary "conversion engine".

This conversion engine (CE) would become the most powerful and portable storage operating system ever conceived. The changing of the format is what enables the unique data reduction, the underlying security of the system (data unrecognisable to any system other than the one it was originally ingested in), and the unparalleled processor utilisation.

How does what Symbolic does with the data differ from compression?

Symbolic IO's patent is based on a non-compressive algorithm. Compression is a non-deterministic process that requires many CPU cycles and offers no guaranteed results. By reformatting binary we see consistent results that de-duplication and compression cannot achieve. There is nothing stopping either us or a customer from applying additional compression techniques to further reduce data, if that is the sole focus of the end-user and they are willing to accept the performance penalties associated with standard compression.

If your system has, say 160 gigabytes of DRAM, how does it cope with a much larger set of data?

Our market solution will have significantly more DRAM than 160GB. We are currently in beta testing with 16GB chips, will have 32GB chips by July, will likely have the only 64GB NV-DIMMs in the world by October, and are in negotiations to have the first 128GB and 256GB NV-DIMMs in the world in Q1 2017. The amplification portion of the product allows us to amplify physical capacities; for example, we have stored 1.12TB in less than 160GB.
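Taking the vendor's figures at face value, the implied amplification factor is simple arithmetic (storing 1.12TB of logical data in 160GB of physical media):

```python
# Implied amplification factor from the figures quoted above.
stored_logical_gb = 1120   # 1.12TB of user data
physical_gb = 160          # physical DRAM/NV-DIMM capacity consumed
factor = stored_logical_gb / physical_gb
print(f"amplification = {factor:.1f}x")   # → amplification = 7.0x
```

Note this 7x figure is higher than the "4x data reduction average" used in the cost comparison below; actual reduction will depend on the data.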

This is also the key point as it relates to 3D XPoint comparisons. Both Samsung and Micron have had NV-DIMM technology for many years; part of the problem was the cost per GB. We anticipate the 128GB NV-DIMM will cost approximately $1,500-$1,800 at volume, which is about $12.75/GB. That could certainly be justified for being 10x faster than 3D XPoint, but once our amplification is applied, even at a 4x data reduction average, the cost per GB of stored data drops to approximately $2.60/GB, which is extremely competitive with the prices we anticipate for 3D XPoint ($3-5/GB). ®
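The effective-cost claim is just module price divided by capacity and by the data-reduction factor; here is that arithmetic on the quoted figures (the 4x average and the price range are the vendor's assumptions, and the exact $/GB depends on which end of the range and what reduction factor is realised):

```python
# Effective cost per GB of stored data, after data reduction.
def effective_cost_per_gb(module_price, capacity_gb, reduction):
    """$/GB of logical (stored) data for one NV-DIMM module."""
    return module_price / (capacity_gb * reduction)

# Quoted 128GB NV-DIMM at $1,500-$1,800, 4x average reduction:
for price in (1500, 1800):
    raw = price / 128
    eff = effective_cost_per_gb(price, 128, 4)
    print(f"${price}: raw ${raw:.2f}/GB, effective ${eff:.2f}/GB")
```

At 4x this works out to roughly $2.90-$3.50 per stored GB; the vendor's ~$2.60/GB figure implies a slightly higher average reduction factor.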