In many ways, the effects of AI and machine learning are already apparent in our day-to-day lives, especially in commerce. Retailers, in particular, are turning to facial recognition systems to help them better target, market and sell their products to an increasingly harried and distracted shopping public.

"I see the recent advances such as deep learning technology for vision as one of the most profound technology leaps I've ever seen or come across," Joe Jensen, Intel's vice president for its internet of things group and general manager of its retail solutions division, told Engadget.

"We've got a partner in China that's developed a vending machine that is just a glass door refrigerator and there's a camera on the front," he continued. The camera not only recognizes the shopper but also tracks the items that they remove from the case and bills their account accordingly. It's essentially a miniature Amazon Go.

"It feels completely seamless from a customer perspective: You walk up, open the door, take what you want," Jensen explained. "You can look at things and put them back, whatever looks good you take, then close the door and just walk away." He points out that the machine costs barely half of what conventional vending machines do yet reportedly sells 40 percent more product than its traditional counterparts because of its ease of use.

Moving forward, Intel hopes to use similar, albeit anonymized, facial-recognition systems to expand this sense of seamlessness to other retail shopping situations. "We should be able to anonymously determine a few things about the shopper. What gender are they? How old are they?" Jensen queried. "With their [observed] size, what do we have in stock right now that we think would be interesting to a shopper like that?"

Associating biometric data with specific accounts isn't nearly as important as using that data to understand the shopper's mood and intentions -- their "shopping mode" -- Jensen argued. He points out that his behavior when shopping with his family (listlessly browsing through various racks of merchandise in an effort to kill time) is very different from when he is shopping for a specific item that he knows he'll purchase (entering the store through the doors nearest the relevant department, walking directly to the appropriate racks, and actively looking for items that match his size and style preferences).

Neither of these shopping modes actually requires knowing who he is specifically in order to extract useful marketing information. "Knowing it's Joe isn't what's really relevant," he said. But understanding the shopper's intention based on their actions and body language could prove invaluable.

Jensen also points out that just a decade ago, this sort of system would have been impossible to deploy. "Trying to recognize a person or how many persons walked by [a security camera], that was a really difficult computer vision challenge 10 years ago," he argued, but those sorts of capabilities are "almost freeware today."

This rapid spread and normalization of advanced computer vision technologies is already having an impact on how we shop and how retailers market their wares. Jensen noted that in May of this year, Walmart quietly began rolling out a fleet of stock-monitoring robots in more than four dozen of its stores nationwide. These autonomous machines cruise the store's aisles, scanning shelves as they pass. Should the robots spot an empty shelf, they alert human employees, who can quickly restock the missing items. Target is currently testing a similar shelf-scanning system in its stores as well.

"I think, as a retailer, the fundamentals of retailing haven't really changed," Jensen figured. "You want to delight your customers, to have products that they want, available to them where they are. I think what we're going to see is AI technologies are going to enable retailers to do the fundamentals of retailing better."

These sorts of advancements are only the tip of the AI iceberg. Even more capable machine-vision systems are already in development thanks to foundational research currently being done by IBM and its partners.

For example, one of the biggest obstacles in creating new AI systems, especially those dealing with visual media, is the need for massive training data sets. However, in November, a team of IBM researchers published their research into a new technique dubbed Delta-encoding.

This methodology allows AI systems to train for "few-shot" object recognition. "Essentially what it's trying to do is to learn and model the sample space around our labeled items," Dr. John Smith, manager of AI Tech for IBM Research AI at the Watson Research Center, told Engadget.

So, say we have a labeled picture of a cat. Rather than feed the system hundreds or thousands more labeled pictures of cats, the Delta-encoder measures the "distances around all the points vested in that category, all the different variants of 'cat'," Smith explained. "As opposed to the representation of the cats themselves."

Once the system learns the Delta model for cats, researchers can introduce an unknown image -- say of a hippopotamus -- and the AI will "synthetically generate new samples around the new ones that are given, which artificially create the training data for what we want it to learn," Smith said. While this capability is still in early development, it could eventually help researchers and developers build and train more robust AI systems far more quickly than they can today.
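The intuition behind this approach can be sketched in a few lines of code. The real Delta-encoder is a learned neural autoencoder; the version below is a deliberately simplified illustration (with hypothetical function names of my own) that captures only the core idea: record how instances of a well-sampled class vary around their center, then transplant that variation onto a single example of a novel class to manufacture extra training samples.

```python
import numpy as np

def learn_deltas(source_embeddings):
    """Collect within-class offsets ("deltas") from a well-sampled class.

    Each delta records one way an instance can differ from the class
    centroid -- the variation, not the class itself.
    """
    centroid = source_embeddings.mean(axis=0)
    return source_embeddings - centroid

def synthesize(novel_embedding, deltas):
    """Apply the learned deltas to a single example of a novel class,
    producing synthetic samples that mimic the source class's variation."""
    return novel_embedding + deltas

# Toy 2-D feature vectors for a well-sampled class ("cat")
cats = np.array([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]])
deltas = learn_deltas(cats)

# One labeled example of a novel class ("hippopotamus")
hippo = np.array([5.0, 5.0])
synthetic_hippos = synthesize(hippo, deltas)
print(synthetic_hippos.shape)  # (3, 2): three synthetic training samples
```

Because the deltas sum to zero by construction, the synthetic samples scatter around the lone hippo example the same way the cats scatter around their centroid -- which is the "learning the sample space around our labeled items" Smith describes, in miniature.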

But moving fast and rapidly designing AI won't be worth much if researchers and developers don't come to terms with existing issues such as the inherent bias within training data sets. To that end, in 2018 IBM released a pair of image data sets designed specifically to reduce the bias of systems trained on them: one, a million-picture-plus set built to help researchers combat bias in facial recognition; the other, a 36,000-image set with models "equally distributed across skin tones, genders, and ages." Whether the company plans to leverage these data sets in its collaboration with the NYPD, which is reportedly developing an AI-backed facial recognition technology that would allow officials to scan security camera footage for suspects based on skin and hair color, remains unclear.

Art and commerce are just two areas within a galaxy of AI advancements that have taken place in 2018. Artificial intelligence and machine vision are revolutionizing the fields of medicine, transportation, manufacturing, design, science, health care, and law enforcement. This technology is no longer in the realm of science fiction; it's already an integral part of the fabric of modern life. So the next time you casually flip off a security camera at the mall, you can be sure that the computer system monitoring it recognizes that gesture and has probably taken offense.