We're still a long way away from Iron Man's digital butler J.A.R.V.I.S., but Facebook, Google and other tech giants are racing to create products that incorporate artificial intelligence and can better "understand" the nuances of human speech, emotion and culture.

Facebook on Wednesday introduced DeepText, which it describes as a deep learning-based text understanding engine that can "understand with near-human accuracy the textual content of several thousand posts per second, spanning more than 20 languages."

"To get closer to how humans understand text, we need to teach the computer to understand things like slang and word-sense disambiguation," the Facebook engineers explained. It's already being tested in the company's Messenger app.

Also on Wednesday, Google shared a 90-second piece of music written by its Magenta software -- proof of progress on its mission to teach machines about art. Douglas Eck, a research scientist at Google developing the artificial intelligence, said in a blog post that the goal is also to help human artists and engineers pursue their own machine-learning projects.

“We believe that the models that have worked so well in speech recognition, translation and image annotation will seed an exciting new crop of tools for art and music creation,” Eck said.

It's not as futuristic as it seems. In fact, artificial intelligence is already widely used in digital assistants like Apple’s Siri and its rivals built by Google, Microsoft and Facebook, as well as in smart-home products like Google Home and Amazon Echo. Google CEO Sundar Pichai said on Wednesday at Recode's Code conference that this competition was friendly, but he also argued Google Now is better than others on the market because “we've been doing it longer."

But business is the bottom line, and artificial intelligence is a very promising payday. The International Data Corporation estimates that the market for machine learning applications will reach $40 billion by 2020, and will generate more than $60 billion worth of productivity improvements for businesses.

Researchers around the globe are already working on projects ranging from how systems react to movies to how they make difficult ethical choices. Engineers at Leibniz University of Hannover in Germany are developing an artificial nervous system that teaches robots to feel pain as a reflex, so they can avoid damage. While scientists are years away from creating "Blade Runner" or "Terminator"-like machines that can easily outsmart or betray humans, critics are already voicing concerns.

Mike Gualtieri, a principal analyst at Forrester Research, says “we have to trust humanity” to develop artificial intelligence responsibly, and wonders, “What if they learn wrong?”

“AI research is ultimately the study of ourselves,” Gualtieri says. “It's frightening to think that a machine could be programmed to feel pain, because it raises the question of how will the machine learn to respond to pain. Will the machine try to eliminate the perceived cause of the pain even if it were a human? Or would it learn to shut down to avoid pain?”

Peter Asaro, a co-founder of the International Committee for Robot Arms Control, says the advance of artificial intelligence and robotics is generally good for society, but their increasing use raises social and ethical issues. Machine intelligence is still not very advanced compared with a human’s ability to improvise, so a critical problem “is that these AI systems will be expected to do more than they are capable of,” he says.

“The result will be that people avoid responsibility for what those systems do,” he says.