
One of the challenges associated with the internet of things is figuring out where to put all that data. If you have dozens of connected devices talking to the cloud (and that is a big if) you’ve got to think about where that data lives, how to normalize it and how to grant others access to it so it becomes useful. Stephen Wolfram, the creator of Mathematica and the Wolfram Alpha search engine, on Wednesday introduced his version of a solution to this problem, called the Wolfram Data Drop.

In simple terms, the Wolfram Data Drop is a repository for data sent from a variety of devices such as Raspberry Pis, Arduinos, Electric Imps or other devices that are pointing at the Data Drop via a few lines of code. In this way, the service reminds me of other data repositories from the sophisticated, like IBM’s IoT Foundation service, to the more rudimentary such as Dweet and Freeboard. You point your device at a cloud and your data shows up there. From that point on, you can grant other services the chance to see and use that data.
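To give a feel for what "pointing a device at the Data Drop via a few lines of code" looks like, here is a minimal Python sketch that builds a request to the Data Drop's Add web API. The bin ID and field names are placeholders, and the exact endpoint and parameters should be verified against Wolfram's documentation before use:

```python
from urllib.parse import urlencode

# Hypothetical bin ID -- a real one is issued when you create a databin.
BIN_ID = "3pw3N73Q"

def build_add_url(bin_id, readings):
    """Build a URL for the Data Drop 'Add' web API.

    The endpoint is assumed from Wolfram's published examples;
    check the official docs before relying on it.
    """
    base = "https://datadrop.wolframcloud.com/api/v1.0/Add"
    params = {"bin": bin_id, **readings}
    return base + "?" + urlencode(params)

# A device would issue an HTTP GET to this URL each time it has a reading.
url = build_add_url(BIN_ID, {"temperature": 22.5, "humidity": 41})
print(url)
```

On a Raspberry Pi or similar board, fetching that URL on a timer is all the "back-end" the device needs, which is the point of the service.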

What makes the Wolfram Data Drop so much more interesting is how it ties in with the other Wolfram services, such as the Wolfram Language and the Wolfram Data Framework. In a blog post discussing the Wolfram Data Drop, Stephen Wolfram describes how his eponymous services work together so that data coming in from sensors slots easily into the language's symbolic representations.

He also explains how data put into the Data Drop is saved using the Wolfram Data Framework (WDF), which means all the data gets saved with symbols that dictate how it should be interpreted.

[blockquote person="" attribution=""]Here’s an important thing: notice that when we got data from the databin, it came with units attached. That’s an example of a crucial feature of the Wolfram Data Drop: it doesn’t just store raw data, it stores data that has real meaning attached to it, so it can be unambiguously understood wherever it’s going to be used. … And every databin in the Wolfram Data Drop can use WDF to define a “data semantics signature” that specifies how its data should be interpreted—and also how our automatic importing and natural language understanding system should process new raw data that comes in. The beauty of all this is that once data is in the Wolfram Data Drop, it becomes both universally interpretable and universally accessible, to the Wolfram Language and to any system that uses the language.

[/blockquote]
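The idea of data that carries its own interpretation can be sketched outside the Wolfram stack, too. Below is a hedged Python illustration (the names are my own, not part of WDF) in which each reading stores a value plus its unit, so a consumer can convert rather than guess:

```python
from dataclasses import dataclass

@dataclass
class Quantity:
    """A value bundled with its unit, in the spirit of WDF's
    semantically tagged data (these names are illustrative)."""
    value: float
    unit: str

def to_celsius(q: Quantity) -> Quantity:
    # The unit tag makes the conversion unambiguous.
    if q.unit == "degC":
        return q
    if q.unit == "degF":
        return Quantity((q.value - 32) * 5 / 9, "degC")
    raise ValueError(f"unknown temperature unit: {q.unit}")

reading = Quantity(72.5, "degF")
print(to_celsius(reading))  # Quantity(value=22.5, unit='degC')
```

The point of the "data semantics signature" Wolfram describes is that this tagging happens automatically on ingest, rather than being something each consumer has to bolt on.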

And while Wolfram is much easier to understand when he’s offering examples instead of making up words like databin and naming ever more bits of services and features after himself, the concepts here are very powerful. Anyone who has ever used the Wolfram Alpha search engine will walk away impressed with the service even if they don’t understand the underlying technology.

The downside, however, is that all of these services fit together like a jigsaw puzzle, and it’s unclear how useful the Wolfram Data Drop is without the Wolfram Language to manipulate the data in it. Still, the coolness of the stuff you can do with the data — from assembling heat maps based on image data over time with a simple command, to building histograms from time-series data — seems almost worth the lock-in.
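For a sense of the kind of one-line analysis being described (here in plain Python rather than the Wolfram Language), bucketing timestamped readings into a histogram reduces to a single expression once the data arrives with timestamps attached. The sample readings below are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamped readings, as they might come back from a databin.
readings = [
    ("2015-03-04T09:12:00", 21.0),
    ("2015-03-04T09:48:00", 21.4),
    ("2015-03-04T10:05:00", 22.1),
    ("2015-03-04T11:30:00", 22.8),
]

# Bucket readings by hour -- the core of a time-series histogram.
by_hour = Counter(datetime.fromisoformat(ts).hour for ts, _ in readings)
print(dict(by_hour))  # {9: 2, 10: 1, 11: 1}
```

In the Wolfram Language the equivalent is reportedly a single built-in call; the sketch above just shows that the heavy lifting is in having clean, timestamped data in the first place.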

The basic Data Drop service is free, and users can create open or private databins. Eventually you will be able to run a Data Drop on the Wolfram Cloud or on your own internal cloud. I assume some of these options will be paid, as will privacy and additional storage.

The beta service is available for folks to start playing with, and Wolfram writes in his blog post that some connected-device companies are already using the service so they don’t have to worry about building their own back-end software and cloud platforms. As he writes:

[blockquote person="" attribution=""]As throughout the Wolfram Language, it’s really a story of automation: the Wolfram Data Drop automates away lots of messiness that’s been associated with collecting and processing actual data from real-world sources. And the result for me is that it’s suddenly realistic for anyone to collect and analyze all sorts of data themselves, without getting any special systems built. For example, last weekend, I ended up using the Wolfram Data Drop to aggregate performance data on our cloud. Normally this would be a complex and messy task that I wouldn’t even consider doing myself. But with the Data Drop, it took me only minutes to set up—and, as it happens, gave me some really interesting results.

[/blockquote]

Wolfram’s enthusiasm makes the Data Drop sound far cooler than IBM’s similar efforts or even those from other players, but what is happening is essentially a sea change in automation, and it is exciting. Being able to grab real-world data and send it up to the cloud (or an on-premise hub) for analysis and immediate visualization is a powerful capability that’s changing the way factory floors are run, data centers are operated and even how homemade whiskey is distilled. For more on this and other IoT-related data topics, check out our Structure Data event on March 18 and 19 in New York City, where Amazon Web Services’ Matt Wood will talk about building infrastructure for the internet of things.