The humanoid robot Pepper of the American company CloudMinds shakes hands with a visitor during MWC 2019. (Photo: Paco Freire | SOPA Images | LightRocket via Getty Images)

Concerns over artificial intelligence's ability to spread disinformation may be overblown, according to one expert. "I think it's a little overhyped," Richard Socher, chief scientist at software firm Salesforce, told CNBC in an interview during a trip to Singapore. That's because humans are already adept at creating fake news without the help of algorithms, Socher said. Furthermore, fake news usually has some sort of "agenda" behind it — something that AI inherently lacks, he added. AI can already be used to superimpose fake images of one person onto another in videos. There have been rising concerns that such videos, known as "deepfakes," could be used to spread misinformation.

"Now that AI really works, we need to think about the impact that it has on people." — Richard Socher, chief scientist at Salesforce

Socher, an expert in natural language processing in the field of machine learning, said current attempts at AI-generated text are often "not as coherent" compared with fake news written by humans. "I honestly think that people are much better at creating fake text news ... than AI will be for a long time," Socher said. "There are already people who look at being able to classify pictures or text and see if they were created by ... an algorithm or not," he said. "As with most things in security, there will be measures and counter-measures and an ongoing sort of race, but of all the concerns I have, this is not as high on my priority list."

Impact on bias and jobs

Instead, Socher spoke about the potential for bias in AI as well as its possible impact on jobs — topics he said were currently at the forefront of discussions about AI ethics. "Now that AI really works, we need to think about the impact that it has on people," he said. "In some ways, AI holds a mirror in front of us," he added, explaining that an algorithm essentially processes data generated by human actions, distilling that data and automating decisions based on it. As a result, it is important to create AI algorithms that make "unbiased decisions that don't discriminate," he said.