The DGX-1 is a $129,000, desktop-sized box packing eight NVIDIA Tesla P100 GPUs, 7TB of SSD storage and two Xeon processors. That adds up to 170 teraflops of performance, roughly the equivalent of 250 conventional servers, and the massively parallel architecture is ideal for OpenAI's deep-learning algorithms. NVIDIA said the system cost around $2 billion to develop.

OpenAI, founded to ensure that machines don't destroy us, will use the DGX-1's extra power to chew through nearly 2 billion Reddit comments in months rather than years. The added horsepower will also help its models learn faster and speak (or swear) more accurately. "You can take a large amount of data that would help people talk to each other on the internet, and you can train, basically, a chatbot, but you can do it in a way that the computer learns how language works and how people interact," Karpathy said.
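The idea in Karpathy's quote can be sketched in a few lines: a language model "learns how language works" purely by counting what tends to come next in real text, with no rules written by hand. This is a toy illustration, not OpenAI's code — a bigram character model trained on a one-sentence stand-in for the Reddit corpus:

```python
from collections import Counter, defaultdict

# Tiny stand-in for a conversation corpus (the real thing would be
# billions of Reddit comments, and the model a deep neural network).
corpus = "you can train a chatbot and the computer learns how language works"

# "Training" here is just counting which character follows which.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def most_likely_next(char):
    """Predict the character most often seen after `char` in the corpus."""
    return counts[char].most_common(1)[0][0]

def generate(seed, length=20):
    """Greedily extend `seed` one predicted character at a time."""
    out = seed
    for _ in range(length):
        out += most_likely_next(out[-1])
    return out

print(generate("t"))
```

The output is gibberish at this scale, but the mechanism is the same one that, with a neural network and two billion comments instead of one sentence, produces a chatbot that talks (and swears) like Reddit.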

Best of all, the researchers won't need to do much to improve areas like language learning and image recognition. "We won't need to write any new code, we'll take our existing code and we'll just increase the size of the model," said OpenAI research scientist Ilya Sutskever. "And we'll get much better results than we have right now." All of that sounds interesting, but I'm not sure how I feel about machines generating hot takes in milliseconds.
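Sutskever's point — same code, bigger model — is easy to see with a back-of-the-envelope calculation. The sketch below is illustrative only (it assumes a character-level LSTM language model, a common architecture at OpenAI's founding, not any specific OpenAI system): bumping one hyperparameter quadruples the hidden size and the parameter count grows by more than 10x, with no new code.

```python
def lstm_param_count(vocab_size, hidden_size, num_layers):
    """Rough parameter count for a character-level LSTM language model.

    Each LSTM layer has 4 gates, each with an input-to-hidden matrix,
    a hidden-to-hidden matrix and a bias vector; a final projection
    maps the hidden state back to vocabulary logits.
    """
    params = 0
    for layer in range(num_layers):
        input_size = vocab_size if layer == 0 else hidden_size
        params += 4 * (input_size * hidden_size      # input weights
                       + hidden_size * hidden_size   # recurrent weights
                       + hidden_size)                # biases
    params += hidden_size * vocab_size + vocab_size  # output projection
    return params

# "Increase the size of the model" = change one number.
small = lstm_param_count(vocab_size=256, hidden_size=512, num_layers=2)
big = lstm_param_count(vocab_size=256, hidden_size=2048, num_layers=2)
print(f"small model: {small:,} parameters")
print(f"big model:   {big:,} parameters")
```

The catch, of course, is that the bigger model needs far more compute to train — which is exactly what the DGX-1 provides.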