Facebook's new open-source Caffe2 deep-learning framework can add intelligence to mobile devices like iPhones and Android handsets, as well as low-power computers like the Raspberry Pi.

Caffe2 can be used to program artificial intelligence features into smartphones and tablets, allowing them to recognize images, video, text, and speech and making them more situationally aware.

It's important to note that Caffe2 is not an AI program itself, but a tool for building AI into smartphone apps. Learning models can be written in just a few lines of code and then bundled into apps.

The release of Caffe2 is significant. It means users will be able to get image recognition, natural language processing, and computer vision directly on their phones. Those tasks are typically offloaded to remote servers in the cloud, which smartphones then connect to.

Mobile devices are getting more artificial intelligence capabilities. More phones are being bundled with Amazon's Alexa and Google Assistant, while Apple's Siri has been a staple in the iPhone for years. Samsung's Galaxy S8 smartphones are due to get the Bixby voice assistant, which should make using the handsets much easier.

Caffe2 can work within the power constraints of mobile devices. It works with mobile hardware to speed up AI applications and create neural networks.

Caffe2 takes advantage of the computing power of new mobile hardware to speed up deep-learning tasks. For example, in smartphones, Caffe2 will harness the computing power of Adreno GPUs and Hexagon DSPs on Qualcomm's Snapdragon mobile chips.

The new machine-learning framework succeeds Caffe, which excelled at image recognition but was mainly used for machine learning in data centers. Caffe2 is a complete overhaul designed to also run on mobile devices.

"We're committed to providing the community with high-performance machine learning tools so that everyone can create intelligent apps and services," Facebook said in a blog entry on the Caffe2 website.

Caffe2 could also be used to create chatbots. The Caffe2 website offers pre-trained models that can serve as starting points for new learning models.

Before this announcement, it was already possible to create deep-learning models on mobile devices through Google's TensorFlow, which could be ported to devices like drones to add image recognition to cameras. As with TensorFlow, Caffe2 code can be easily ported between multiple environments.

The open-source framework is also a lot faster than the original Caffe. Benchmarks by Intel, Qualcomm, and Nvidia show significant speed boosts over Caffe and other machine-learning frameworks.

There are other machine-learning frameworks like Theano and Microsoft's Cognitive Toolkit (CNTK). Companies deploying machine learning sometimes mix and match frameworks depending on applications.

But the major appeal of Caffe2 remains tied to mega data centers. For example, servers with GPUs are used to create the rich data sets needed for image recognition, which involves classifying and labeling pixels to identify an object accurately. The learning model gets more accurate as it is fed more data. That's especially handy in applications like self-driving cars, which need to identify objects to avoid collisions.

Nvidia claims that Caffe2 will run significantly faster on its high-end GPUs than the original Caffe. Some Nvidia GPUs designed for machine learning have low-precision floating-point capabilities, which are instrumental in building powerful neural networks that make accurate predictions.

Facebook is expected to share more details on Caffe2 on Wednesday during the F8 conference being held in San Jose, California.