The only thing that flies faster than time is the progress of technology. Once after lunch, a chip-designing friend excused himself quickly with the deft explanation that Moore’s Law meant that he had to make his chip set 0.67 percent faster each week, even while on vacation. If he didn’t, the chips wouldn’t double in speed every two years.

Now that 2017 is here, it’s time to take stock of the technological changes ahead, if only to help you know where to place your bets in building programming skills for the future.

From the increasing security headache of the internet of things to machine learning everywhere, the future of programming keeps getting harder to predict.

The cloud will defeat Moore’s Law

There are naysayers who claim the chip companies have hit a wall. They’re no longer doubling chip speed every two years as they did during the halcyon years of the ’80s and ’90s. Perhaps -- but it doesn’t matter anymore because the boundaries between chips are less defined than ever.

In the past, the speed of the CPU in the box on your desk mattered because, well, you could only go as fast as the silicon hamster inside could spin its wheel. Buying a bigger, faster hamster every few years doubled your productivity, too.

But now the CPU on your desk does little more than paint information on the screen. Most of the work is done in the cloud, where it’s not clear how many hamsters are working on your job. When you search Google, its massive cloud could devote 10, 20, even 1,000 hamsters to finding the right answer for you.

The challenge for programmers is finding clever ways to elastically deploy just enough computing power to each user’s problem so that the solution comes fast enough and the user doesn’t get bored and wander off to a competitor’s site. There’s plenty of power available. The cloud companies will let you handle the crush of users, but you have to find algorithms that work easily in parallel, then arrange for the servers to work in synchrony.
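The pattern at the heart of this is easy to sketch. Here’s a minimal fan-out in JavaScript, assuming hypothetical shard endpoints that each return a JSON array of scored hits -- the URLs and response shape are invented for illustration:

    // Fan one query out to several servers in parallel, then merge the
    // partial answers. The shard URLs and response shape are hypothetical.
    const SHARDS = [
      'https://shard-1.example.com/search',
      'https://shard-2.example.com/search',
      'https://shard-3.example.com/search',
    ];

    async function search(query) {
      const responses = await Promise.all(
        SHARDS.map((url) =>
          fetch(`${url}?q=${encodeURIComponent(query)}`).then((r) => r.json())
        )
      );
      // Keep the 10 best-scoring hits from all the shards combined.
      return responses
        .flat()
        .sort((a, b) => b.score - a.score)
        .slice(0, 10);
    }

Promise.all is the synchrony: the answer comes back only when the slowest hamster finishes, which is why balancing the work across shards matters as much as adding more of them.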

IoT security will only get scarier

The Mirai botnet that unfolded this past fall was a wake-up call for programmers who are creating the next generation of the internet of things. These clever little devices can be infected like any other computer, and they can use their internet connection to wreak havoc and let slip the dogs of war. And as everyone knows, dogs can pretend to be anyone on the internet.

The trouble is that the current supply chain for gadgets doesn’t have any mechanism for fixing software. The lifecycle of a gadget usually begins with a long trip from a manufacturing plant to a warehouse and finally to the user. It’s not unusual for up to 10 months to pass between assembly and first use. The gadgets are shipped halfway around the world over those long, lingering months. They sit in boxes waiting in shipping containers. Then they sit on pallets at big box stores or in warehouses. By the time they’re unpacked, anything could have happened to them.

The challenge is keeping track of it all. It’s hard enough to change the batteries in the smoke detectors every time the clocks change. But now we’ll have to wonder about our toaster oven, our clothes dryer, and pretty much everything in the house. Is the software up-to-date? Have all the security patches been applied? The sheer number of devices makes it harder to do anything intelligent about monitoring the home network. There are more than 30 devices with IP addresses connected to my wireless router, and I know the identity of only 24 of them. If I wanted to maintain a smart firewall, I would go nuts opening up the right ports for the right smart things.

Giving these devices the chance to run arbitrary code is a blessing and a curse. If programmers want to perform clever tasks and let users have maximum flexibility, the platforms should be open. That’s how the maker revolution and open source creativity flourish. But this also gives virus writers more opportunity than ever before. All they need to do is find one brand of widget that hasn’t updated a particular driver -- voilà, they’ve found millions of widgets primed to host bots.

Video will dominate the web in new ways

When the HTML standards committee started embedding video tags into HTML itself, they probably didn’t have grand plans of remaking entertainment. They probably only wanted to fix the glitches caused by plugins. But the basic video tags respond to JavaScript commands, and that makes them essentially programmable.
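A few lines of JavaScript are enough to see the point. This sketch assumes a page that already contains <video id="player" src="clip.mp4"></video>; everything else is the standard media-element API:

    const player = document.getElementById('player');

    player.playbackRate = 1.5; // speed the clip up
    player.currentTime = 42;   // jump straight to an arbitrary moment
    player.play();             // returns a Promise in modern browsers

    // Playback is just another event stream to script against.
    player.addEventListener('timeupdate', () => {
      if (player.currentTime > 60) player.pause();
    });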

That is a big change. In the past, most videos have been consumed very passively. You sit down on the couch, push the play button, and see what the video’s editor decided you should see. Everyone watching that cat video sees the cats in the same sequence decided by the cat video’s creator. Sure, a few viewers fast-forward, but videos head to their conclusion with as much regularity as Swiss trains.

JavaScript’s control of video is limited, but the slickest web designers are figuring out clever ways to integrate video with the rest of the web page in a seamless canvas. This opens up the possibility for users to control how the narrative unfolds and interact with the video. No one can be sure what the writers, artists, and editors will imagine, but they’ll require programming talent to make it happen.
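One hedged sketch of where this could go: a branching story in which the viewer picks the next scene when a clip ends. The scene names are invented, and window.prompt is a crude stand-in for real on-screen choice buttons:

    const player = document.querySelector('video');

    const scenes = {
      intro:  { src: 'intro.mp4',  next: ['chase', 'escape'] },
      chase:  { src: 'chase.mp4',  next: [] },
      escape: { src: 'escape.mp4', next: [] },
    };

    function askViewer(options) {
      // Stand-in for a proper choice UI overlaid on the video.
      const pick = window.prompt(`Next scene? (${options.join(' / ')})`);
      return options.includes(pick) ? pick : options[0];
    }

    function playScene(name) {
      const scene = scenes[name];
      player.src = scene.src;
      player.play();
      player.onended = () => {
        if (scene.next.length > 0) playScene(askViewer(scene.next));
      };
    }

    playScene('intro');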

Many of the slickest websites already have video tightly integrated in clever spots. Soon they’ll all want moving things. It won’t be enough to put up an IMG tag pointing at a JPEG file. You’ll need to grab video -- and deal with the standards issues that have fragmented the browser world.
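The standards issue is concrete: no single codec plays everywhere, so the page has to ask the browser what it can handle. The canPlayType call below is the real API; the file names are hypothetical:

    const video = document.createElement('video');

    const sources = [
      { src: 'clip.webm', type: 'video/webm; codecs="vp9"' },
      { src: 'clip.mp4',  type: 'video/mp4; codecs="avc1.42E01E"' },
    ];

    // canPlayType answers "probably", "maybe", or "" for each candidate.
    const playable = sources.find((s) => video.canPlayType(s.type) !== '');
    if (playable) {
      video.src = playable.src;
      document.body.appendChild(video);
    }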

Consoles will continue to replace PCs

It’s hard to be mad at gaming consoles. The games are great, and the graphics are amazing. The console makers have built great video cards and relatively stable software platforms that let us relax in the living room and dream about shooting bad guys or throwing a football.

Living room consoles are only the beginning. The makers of items for the rest of the house are following the same path. They could have chosen an open source ecosystem, but the manufacturers are building their own closed platforms.

This fragments the marketplace and makes it harder for programmers to keep everything straight. What runs on one light switch won’t run on another. The hair dryer may speak the same protocol as the toaster, but it probably won’t. That means more work for programmers getting up to speed and fewer opportunities to reuse our work.

Data will remain king

After the 2016 U.S. presidential election, word-slinging pundits made fun of data-slinging pundits, suggesting that all of their statistical analysis was an exercise in foolishness. Predictions were dramatically wrong, and the big data people looked bad.

How did they come to this conclusion? By comparing one set of numbers (the predictions) with another set of numbers (the election results). They still needed the data.

Data is the way we see on the internet. Light brings us information about the real world, but numbers tell us about everything online. Some people may make bad predictions based on imperfect numbers, but that doesn’t mean we should stop gathering and interpreting the numbers.

Data gathering, collating, curating, and parsing will continue to be one of the most important jobs for the enterprise. The decision makers need the numbers, and the programmers will continue to be tasked with delivering data in a way that’s easier to understand. This doesn’t mean the answers will be perfect. Context and intuition will continue to have a role, but the need to wrangle data won’t go away simply because a few folks predicted that Donald Trump wouldn’t be elected. This means more work for programmers, as there is no end in sight for our need to build bigger, faster, more data-intensive software.

Machine learning will become the new standard feature

When kids in college take a course called “Data Structures,” they get to learn what life was like when their grandparents wrote code and couldn’t depend on the existence of a layer called “the database.” Real programmers had to store, sort, and join tables full of data, without the help of Oracle, MySQL, or MongoDB.

Machine learning algorithms are a few short years away from making that jump. Right now programmers and data scientists need to write much of their own code to perform complex analysis. Soon, languages like R and some of the cleverest business intelligence tools will stop being special and start being a regular feature in most software stacks. They’ll go from being four or five special slides in the PowerPoint sales deck to a little rectangle in the architecture drawing that’s taken for granted.
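For a taste of what “writing your own code” means today, here is a from-scratch ordinary least-squares fit in JavaScript -- exactly the kind of routine that will eventually disappear into a taken-for-granted layer, the way hand-rolled sorting disappeared into the database:

    // Fit y = slope * x + intercept by ordinary least squares, no library.
    function linearFit(xs, ys) {
      const n = xs.length;
      const meanX = xs.reduce((a, b) => a + b, 0) / n;
      const meanY = ys.reduce((a, b) => a + b, 0) / n;
      let num = 0;
      let den = 0;
      for (let i = 0; i < n; i++) {
        num += (xs[i] - meanX) * (ys[i] - meanY);
        den += (xs[i] - meanX) ** 2;
      }
      const slope = num / den;
      return { slope, intercept: meanY - slope * meanX };
    }

    // Noisy samples of y ≈ 2x + 1 recover roughly that line.
    console.log(linearFit([1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8]));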

It won’t happen overnight, and it’s not clear exactly what shape it will take, but it’s clear that more and more business plans depend on machine learning algorithms finding the best solutions.

UI design will get more complicated as PCs continue to fade

Each day, it seems, there is one less reason to use a PC. Between the rise of smartphones, living room consoles, and tablets, the only folks who still seem to cling to PCs are office workers and students who need to turn in an assignment.

This can be a challenge for programmers. It used to be easy to assume that software or website users would have a keyboard and a mouse. Now many users don’t have either. Smartphone users are mashing their fingers into a glass screen that barely has room for all 26 letters. Console users are pushing arrow keys on a remote.

Designing websites is getting trickier because a touch event is slightly different from a click event. Users have different amounts of precision and screens vary greatly in size. It’s not easy to keep it all straight, and it’s only going to get worse in the years ahead.
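One thing that helps today is the Pointer Events API, which folds mouse, touch, and pen into a single event stream. A minimal sketch, assuming some element with the id "canvas" on the page:

    const target = document.getElementById('canvas');

    target.addEventListener('pointerdown', (e) => {
      // pointerType says which device fired; width and height hint at how
      // imprecise a fingertip is compared to a mouse cursor.
      console.log(e.pointerType, e.clientX, e.clientY, e.width, e.height);
    });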

The end of openness

The passing of the PC isn’t only the slow death of a particular form factor. It’s the dying of a particularly open and welcoming marketplace. The death of the PC will be a closing of possibilities.

When PCs first shipped, a programmer could compile code, copy it onto disks, pop those disks into ziplock bags, and the world could buy it. There was no middleman, no gatekeeper, no stern central force asking us to say, “Mother, may I?”

Consoles are tightly locked down. No one gets into that marketplace without an investment of capital. The app stores are a bit more open, but they’re still walled gardens that limit what we can do. Sure, they’re open to programmers who jump through the right hoops, but anyone who makes a false move can be tossed. (Somehow they’re always delaying our apps while the malware slips through. Go figure.)

This distinction is important for open source. It’s not solely about selling floppy disks in baggies. We’re losing the ability to share code because we’re losing the ability to compile and run code. The end of the PC is a big part of the end of openness. For now, most of the people reading this probably have a decent desktop that can compile and run code, but that’s slowly changing.

Fewer people have the opportunity to write code and share it. For all of the talk about the need to teach the next generation to program, there are fewer practical vectors for open code to be distributed.

Autonomous transportation is here to stay

It’s not cars alone. Some want to make autonomous planes that aren’t encumbered by the need for roads. Others want to create autonomous skateboards for very lightweight travel. If it moves, some hacker has dreams of telling it where to go.

Programmers won’t just control what people see on a screen. They’ll control where people go and how they interact with the world. And people are only part of the game. All of our stuff will also move autonomously.

If you want dinner from a famous chef downtown, an autonomous skateboard with a heated chamber may bring it to your house. If you want your lawn mowed, an autonomous lawn mower will replace the neighborhood kid.

And programmers can use all of the cool ideas they had during the first internet revolution. If you thought pop-up ads were bad on the internet, wait until programmers are paid to divert your autonomous roller skates past the kitchen vent of a new restaurant. Hungry yet?

The law will find new limits

The ink was barely dry on the Bill of Rights when the debates began over what it means for a search of our papers to be reasonable. Now, more than 200 years later, we’re still arguing the details.

Changes in technology open up new avenues for the law. A few years ago, the Supreme Court decided that vehicle tracking technology requires a warrant. But that’s only when the police plant the tracker in the car. No one really knows what rules apply when someone subpoenas the tracking data from Waze, Google Maps, or any of the hundreds of other apps that cache our locations.

What about influencing how the machines operate? It’s one thing to download data, but it’s frightfully tempting to change the data, too. Is it fair for the police (or private actors) to forge documents, headers, or bits? Does it matter if the targets are true terrorists or simply people who’ve parked too long in a no-parking spot without feeding the meter?