Download Source Code: more GitHub links and source code info are below under "Beta Version 0.2".

Layperson's Description:

Grassland is peer-to-peer networked AI software that takes video camera footage and remembers and stores the individual features, movements and geo-locations of every person, vehicle, building and relevant object as a virtual, 3D simulation in real time. The software shares only the relevant 3D data with the rest of the network, so that every node can maintain a complete simulation of what all the cameras are seeing around the world.

It was originally designed because the Université de Montréal's AI lab[8] asked (April 13, 2018; ~3:20 PM) if I could help them solve the problem of giving AI an intuitive understanding of the real world, owing to some previous discoveries I'd made in this field[7]. The problem intrigued me, so I went back to my home in Ottawa and a month later delivered a solution based on an "extension" of some overlooked mathematical equations. However, it was clear that it was better in every respect to turn the software into a public utility by making it open source and peer-to-peer networked: not only would it grow much faster and have greater adaptability (it can thrive and learn from censorship), but anyone could add cameras to it with no limits or restrictions, and anyone could build apps that query the network's data API for information. This lets any internet-connected object "walk through" and "experience" people's entire lives (akin to how humans with eidetic memories experience life), or the "life" of any other object or building, from all perspectives and timestamps at once, without even needing to be there.

Although the data is the same for everyone (it's just a model of the real world), anyone can build both public and private applications based on the specific problems they want to solve with that data. It could be finding lost children, helping a hedge fund model a retail store or factory's performance to predict quarterly earnings, giving an insurance company the tools to model and assess its risk portfolio, or helping a city solve its traffic and emergency response problems.

Technical Description:

Grassland is a self-organizing, self-correcting and self-financing P2P network of robot vision software that efficiently scans any 2D video feed from any single-viewpoint camera to generate a compressed, searchable, timestamped, real-time, 3D simulation of the world. The network's game-theory-based mathematical framework exhibits positive sensitivity to stressors; e.g., censorship makes it stronger, as the network trustlessly learns the socioeconomic, domestic and (via thermal cameras) cardiovascular nature of political rivals through a prisoner's dilemma.
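To make the idea of "compressed, searchable, timestamped" world data concrete, here is a minimal sketch of what a per-frame object record and a time-window search over such records could look like. All names, fields and values below are illustrative assumptions, not Grassland's actual schema or wire format.

```python
from dataclasses import dataclass

# Hypothetical record schema: field names are illustrative assumptions,
# not Grassland's actual data model.
@dataclass
class ObjectState:
    object_id: str    # stable identity of the tracked person/vehicle/etc.
    label: str        # e.g. "pedestrian", "vehicle", "building"
    timestamp: float  # Unix time the frame was captured
    lat: float        # geo-location inferred from the camera's pose
    lon: float
    features: tuple   # compressed appearance/pose features

def query_by_time(states, start, end):
    """Search the timestamped records for a window of interest."""
    return [s for s in states if start <= s.timestamp <= end]

# Two example records from a node's local slice of the world model.
states = [
    ObjectState("car-17", "vehicle", 1700000000.0, 45.42, -75.69, (0.1, 0.4)),
    ObjectState("ped-03", "pedestrian", 1700000060.0, 45.42, -75.70, (0.7, 0.2)),
]
recent = query_by_time(states, 1700000030.0, 1700000100.0)
```

Because each record is a small tuple of identity, class, time and position rather than raw video, records like these are cheap to share across the network and to index for search.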

Grassland is open-source and isn't owned or controlled by anyone. It's politically stateless, and anyone can take part. Every node in the network has a permissionless, public API giving any external application or computer free access to Grassland data across the entire network. This lets any internet-connected object trustlessly internalize, understand and interact intuitively with both past and present states of the real world, and digitally recreate or respond to even the tiniest changes taking place around the globe, from a butterfly flapping its wings in Calgary, to the lip-read conversations of pedestrians in Buenos Aires, to understanding that a motorcycle is signaling a left turn in Beijing, all at zero cost and in real time. Meanwhile, the combined work of the network makes it computationally intractable for nodes to submit fake data (see the proof-of-work description below).