The usual process involves researchers crafting scripts to define every little movement an autonomous sea rover makes, but drafting those fine-grained plans is a ton of work -- especially when engineers have to coordinate a group of machines navigating the seas in tandem. MIT's new system instead lets them issue higher-level commands -- something like "explore this sunken ship for four hours," or whatever the code equivalent would be -- leaving the rovers to react to things they find along the way and shuffle priorities when some objectives take longer than expected. That might not sound thrilling at first, but it could mean big things for the future of oceanic exploration: giving smarter machines a shot at exploring the oceans without direct, highly involved human supervision means schools and research institutions could more easily drop these drones into the water and simply sift through the data they return.
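MIT hasn't published the details yet, so this is purely an illustrative sketch of the general idea -- a rover juggling prioritized goals inside a time budget, and replanning when something runs long. The `Goal` class, its fields, and the greedy scheduler are all hypothetical, not MIT's actual approach:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int      # higher number = more important
    est_hours: float   # estimated time to complete

def plan(goals, budget_hours):
    """Greedy sketch: fit the highest-priority goals into the time budget.

    If a task overruns, the rover would call this again with the remaining
    goals and the remaining budget -- that's the "switch up priorities" step.
    """
    schedule, remaining = [], budget_hours
    for g in sorted(goals, key=lambda g: -g.priority):
        if g.est_hours <= remaining:
            schedule.append(g.name)
            remaining -= g.est_hours
    return schedule

goals = [
    Goal("survey sunken ship", priority=3, est_hours=4),
    Goal("sample sediment", priority=2, est_hours=2),
    Goal("photograph reef", priority=1, est_hours=3),
]
print(plan(goals, budget_hours=6))  # ship and sediment fit; the reef gets cut
```

The point of the structure, rather than the toy scheduler itself: humans specify *what* matters and for how long, and the machine decides *which* movements to make.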

In a way, these researchers are thinking of their deep-sea tools as something a little closer to human than before. After all, when you get up to grab a book off a shelf, you're acting on a high-level directive, not laboriously thinking about standing up, walking x steps to a bookcase, lifting your arm, opening your hand and so on. It turns out this sort of faith in mechanical cognition might be new to the seas, but it has its roots in the stars -- Williams worked on a similar system for NASA, one that performed just fine when it was baked into a space probe exploring an asteroid. For now, exactly how the system works (beyond "algorithmically") is still mostly a secret, but MIT plans to show off its inner workings at a conference in June. You'd better get that popcorn ready.