You didn’t have to be a data structure nerd to get through part 1 on the graph of timelines, and you don’t need to be an algorithm nerd for this one.

If you happen to enjoy algorithms, you’ll find this one mind-bogglingly simple. But, like with data structures, what matters most is how the algorithm benefits us.

This algorithm, which I’ll call Contextual Prediction, is the method by which relevant and useful predictions are extracted from a graph of timelines. It is not the final piece of the puzzle, though. The third key piece is a user interface that I’ll describe in part 3. However, if you look carefully with an open mind, you’ll see it here already.

Quick overview of the underlying data structure

A graph of timelines is a graph where each node is a unique timeline, with interior nodes aggregating the timelines of the leaf nodes. Like this:

This structure enables good, easy-to-make predictions because it focuses on “clean” timelines: timelines whose events or actions are all of the same type and at the same level of abstraction. Outer nodes represent more concrete types of activity, while inner nodes represent more abstract ones. This layout maps neatly onto the contextual structure underlying human behavior.
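Concretely, one minimal way to represent such a graph is a node type that either holds a timeline’s events (leaves) or aggregates children (interior nodes). This is a hypothetical sketch; the names `Node`, `events`, and `children` are my own, not from any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in a graph of timelines (illustrative sketch only)."""
    name: str
    events: list = field(default_factory=list)    # event timestamps; populated on leaf timelines
    children: list = field(default_factory=list)  # child nodes; populated on interior nodes

# A tiny slice of the example graph: "Life" aggregates "Work",
# which aggregates the "Email Status Update" leaf timeline.
email = Node("Email Status Update", events=[2.0, 12.0, 26.0, 34.0])
work = Node("Work", children=[email])
life = Node("Life", children=[work])
```

An interior node like `work` carries no events of its own here; it aggregates its leaves, which keeps each timeline “clean” in the sense above.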

I narrowed the purpose of this graph to recording and structuring observations of our own behavior, so that we can capture the value of predicting our own futures. This matters because when our lives are not going as smoothly as we’d like, the affliction can be viewed as a relative shortage of predictive ability. When we improve our ability to predict, many of our problems become easier to identify and solve, and some can be avoided entirely.

Extrapolating from individual timelines

We begin with a timeline of events. It doesn’t matter what time scale the events are on or what type of activity the timeline represents. There must be at least two events, though; more is better.

A simple timeline

The idea is to approximate when the next event might occur. This process is straightforward when the events are all of the same type and at the same level of abstraction.

When is the next event on this timeline likely to occur?

The simplest method is linear extrapolation: calculate the average interval between the most recent events, in this case (10 + 14 + 8) / 3 ≈ 10.7. But there’s no limit to the precision, complexity, or scope of this extrapolation. Other timelines in the graph, and even the distant past of the same timeline, may contribute. It’s a simplified form of linear regression, one of the fundamentals of machine learning.
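A minimal sketch of that averaging step (the `predict_next` name and the three-interval `window` are my own illustrative choices):

```python
def predict_next(events, window=3):
    """Linear extrapolation: average the last `window` intervals and
    project that far past the most recent event."""
    intervals = [b - a for a, b in zip(events, events[1:])]
    recent = intervals[-window:]
    return events[-1] + sum(recent) / len(recent)

# Timestamps whose last three intervals are 10, 14, and 8,
# matching the example above.
events = [0, 10, 24, 32]
print(round(predict_next(events) - events[-1], 1))  # 10.7
```

A wider `window`, a weighted average, or inputs from related timelines would all be refinements of this same step.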

The next event is extrapolated from intervals between past events

Of course, the extrapolated event is only an approximation. The next event might happen right away, might take much longer than expected, or may never reoccur. Regardless of what eventually happens, the expectation is well represented by a probability distribution. A Gaussian (normal) distribution based on the event frequency is a fine starting point, and it can be refined later as specialized knowledge is gained about the type of activity.

View the extrapolated event as a gradient of probabilities

Then, as time marches on, the likelihood of re-occurrence “now” or “soon” is recalculated. The next event becomes the new reference point whenever it occurs and the extrapolation process repeats.

As “now” changes, so does the probability of the event re-occurring

Through this process, each timeline produces a numeric score, say between 0 and 1. In my implementation I treat 0.0 as “it just happened”, 0.5 as “it’s due to reoccur around now”, and 1.0 as “it’s well overdue”. That’s not set in stone, though; do what works for you.
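As an illustration, here is one possible scoring function. It uses a simple linear ramp rather than a full Gaussian; the `timeline_score` name and the choice to saturate at twice the expected interval are my own assumptions, not the author’s implementation.

```python
def timeline_score(now, last_event, expected_interval):
    """Map elapsed time to a 0..1 'due-ness' score: 0.0 means it just
    happened, 0.5 means it's due about now, 1.0 means well overdue.
    A linear ramp that saturates at twice the expected interval."""
    elapsed = now - last_event
    return min(max(elapsed / (2 * expected_interval), 0.0), 1.0)

# With an expected interval of 10.7:
print(timeline_score(32.0, 32.0, 10.7))            # 0.0, just happened
print(round(timeline_score(42.7, 32.0, 10.7), 2))  # 0.5, due about now
print(timeline_score(60.0, 32.0, 10.7))            # 1.0, well overdue
```

Swapping the ramp for a Gaussian cumulative distribution centered on the expected interval would be the natural refinement mentioned above.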

Involving the graph

Each timeline, then, reduces to a score between 0.0 and 1.0. Now we can traverse the graph, calculating these scores to produce a sorted list.

The question is, at which node should the traversal begin and how deep into the graph should it go? Naturally that depends on the goal. For the sake of simplicity, I’ll skip the depth question and always traverse out to the timelines, ignoring the possibility of cycles. The question of where to begin, however, is more important.

The global view

Let’s say we always traverse from the graph’s root, keeping the “Life” node as our focus. This gives us a global view of upcoming events. The score of every timeline, no matter how distant, detailed, near, or abstract, would be included in the list. Let’s work with a simple example graph.

It’s a global view when the root is the focus

The global list might look like this:

Truck -> Add Oil                0.80
Work -> Email Status Update     0.65
Dinner -> Pasta                 0.60
Work -> Server Maintenance      0.50
Fun -> Volleyball               0.46
Truck -> Check Tire Pressure    0.43
Work -> Testing                 0.25
Dinner -> Rice & Beans          0.20
Truck -> Oil-change             0.15
Fun -> Mountain Biking          0.05
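For illustration, here is one way such a global list could be produced. The nested-dict graph and the `global_view` helper are hypothetical, with leaf values standing in for precomputed timeline scores.

```python
# Hypothetical nested-dict graph: interior nodes map to children,
# leaves map to their precomputed timeline scores.
graph = {
    "Life": {
        "Truck": {"Add Oil": 0.80, "Check Tire Pressure": 0.43, "Oil-change": 0.15},
        "Work": {"Email Status Update": 0.65, "Server Maintenance": 0.50, "Testing": 0.25},
        "Dinner": {"Pasta": 0.60, "Rice & Beans": 0.20},
        "Fun": {"Volleyball": 0.46, "Mountain Biking": 0.05},
    }
}

def global_view(graph, root="Life"):
    """Depth-first traversal collecting (path, score) for every leaf
    timeline, sorted by score descending."""
    items = []
    def walk(name, sub, path):
        if isinstance(sub, dict):
            for child, grandchild in sub.items():
                walk(child, grandchild, path + [name])
        else:
            # Drop the root from the printed path.
            items.append((" -> ".join((path + [name])[1:]), sub))
    walk(root, graph[root], [])
    return sorted(items, key=lambda kv: -kv[1])

for path, score in global_view(graph):
    print(f"{path}  {score:.2f}")
```

Running this reproduces the list: "Truck -> Add Oil" at the top with 0.80, "Fun -> Mountain Biking" at the bottom with 0.05.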

The problem with the global view is that its predictions would be mostly irrelevant and unappreciated. Human behavior — the domain we’re working within — is highly contextual. To take a global view is to discard the key information provided by context.

The contextual view

When I’m focused on work — physically and mentally inside the work context — I don’t care that I need to add oil to my truck or that I’m likely to have pasta for dinner. Nor do I much care that I’ll probably play volleyball in the evening or that it’s garbage day tomorrow. Those predictions may all be accurate but they are not relevant when I’m working.

I’d appreciate work-related predictions though: send out a status update soon, the server is due for a crash, time to stand up for a break, and so on. And it would be wonderful if the predictions were tailored to both the specific task I am working on and the depth of that focus.

Instead of beginning our traversal from the root, we can begin from the “Work” node for this example, making it our focus. Once focus can shift like this, it makes sense to record the graph distance between the focus and each of the other nodes. (Graph distance is meaningless in the global view, since all nodes are always included.)

Shifting focus to the “Work” node changes all of the graph distances, since they are relative to the current focus.

Graph distances relative to the Work node

The result is that we can now adjust the scores of distant nodes downward, as simply as dividing each score by its timeline’s graph distance. The resulting list is much more contextually relevant.

                              Old   Divisor   New
Work -> Email Status Update   0.65     1      0.65
Work -> Server Maintenance    0.50     1      0.50
Truck -> Add Oil              0.80     3      0.27
Work -> Testing               0.25     1      0.25
Dinner -> Pasta               0.60     3      0.20
Fun -> Volleyball             0.46     3      0.15
Truck -> Check Tire Pressure  0.43     3      0.14
Dinner -> Rice & Beans        0.20     3      0.07
Truck -> Oil-change           0.15     3      0.05
Fun -> Mountain Biking        0.05     3      0.02

Notice that the work-related concerns cluster near the top of the list. The focus can be shifted to any node — effectively switching contexts — and the sort order will remain contextually relevant.
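A sketch of this contextual view: breadth-first search gives each node’s graph distance from the focus, and each timeline’s score is divided by that distance. The `parents` map, the helper names, and treating the graph as an undirected tree are all my own assumptions for the example.

```python
from collections import deque

# Hypothetical parent links for the example graph (treated as undirected edges).
parents = {
    "Truck": "Life", "Work": "Life", "Dinner": "Life", "Fun": "Life",
    "Add Oil": "Truck", "Check Tire Pressure": "Truck", "Oil-change": "Truck",
    "Email Status Update": "Work", "Server Maintenance": "Work", "Testing": "Work",
    "Pasta": "Dinner", "Rice & Beans": "Dinner",
    "Volleyball": "Fun", "Mountain Biking": "Fun",
}

# Raw timeline scores, as in the global view.
scores = {
    "Add Oil": 0.80, "Email Status Update": 0.65, "Pasta": 0.60,
    "Server Maintenance": 0.50, "Volleyball": 0.46, "Check Tire Pressure": 0.43,
    "Testing": 0.25, "Rice & Beans": 0.20, "Oil-change": 0.15,
    "Mountain Biking": 0.05,
}

def graph_distances(parents, focus):
    """Hop counts from the focus node, via breadth-first search."""
    edges = {}
    for child, parent in parents.items():
        edges.setdefault(child, []).append(parent)
        edges.setdefault(parent, []).append(child)
    dist, queue = {focus: 0}, deque([focus])
    while queue:
        node = queue.popleft()
        for neighbor in edges[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def contextual_view(scores, parents, focus):
    """Divide each timeline's score by its distance from the focus; sort descending."""
    dist = graph_distances(parents, focus)
    return sorted(((name, s / dist[name]) for name, s in scores.items()),
                  key=lambda kv: -kv[1])

for name, score in contextual_view(scores, parents, "Work"):
    print(f"{name}  {score:.2f}")
```

With “Work” as the focus, Email Status Update (0.65) and Server Maintenance (0.50) top the list, while Add Oil drops from 0.80 to 0.27, matching the table above.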

Although I’ve only outlined how a simple graph might work, keep in mind that a graph of timelines supports any graph size and degree of timeline detail. The algorithm works the same way with large graphs, although traversal distance limits start to make sense at large scales.

The sorted list of event probabilities offers a quantified view into a context-aware future. Think of the kind of information an advertiser would want: a dynamically updated list like this would be near their ideal. If my oil-change probability rose to 0.7 or so (overdue), I might jump at the first reasonable offer.

Since my “openness” to an oil change is now quantified and digitized, I could even publish it, along with a set of criteria, via an API into the local market and have my need filled automatically, much like placing an equity order into the stock market. But only on my terms; this information is too valuable to be public.

Next up, the complementary user interface

Naturally, no sane person is going to manipulate a graph of timelines or run this kind of algorithm manually. These ideas only make complete sense when there’s a UI to tie them all together: one that enables simple timeline input and direct graph manipulation, and supports the idea of focus shifting. Ideally in a streamlined, unified package.

Will it look and feel different from the UIs we’re used to? Yes, that’s a fair assumption to make. Our applications, with their familiar UIs, don’t really make predictions for our benefit. They’re not built on the kind of simplicity that’s required.

Given the deluge of information our computers and applications now spew, one could argue that they’re barely working for us at all. They’re enabling predictions about our behavior alright, but not for our benefit. The predictions are for the benefit of the glorified advertising networks and information-brokers that modern technology companies have become.

So when talking about achieving what is basically the inverse of what we’re all used to, it makes sense that the resulting UI will also be the inverse. But then, if it delivers on its promise, who gives a shit? It would be insane to expect more of the same to suddenly produce the opposite result.