Artificial intelligence research – for at least the foreseeable future – is going to help humans, not harm them.

However, fears about artificial intelligence (AI) and the development of smart robots that have made headlines recently could slow research into an important technology.

That's the thinking from AI researchers and industry analysts attending the AAAI-15 conference in Austin, Texas, this week.

"People who are alarmed are thinking way ahead," said Oren Etzioni, CEO of the Allen Institute for AI. "The thing I would say is AI will empower us, not exterminate us… It could set AI back if people took what some are saying literally and seriously."

AI risks – whether current or far in the future – were the topic of many conversations at the annual AI conference, since scientific and high-tech luminaries recently raised red flags about building intelligent machines.

Early in December, renowned physicist Stephen Hawking said in an interview with the BBC that the development of "full artificial intelligence" could bring an end to the human race.

While Hawking said artificial intelligence today poses no threat to humans, he added that he worries about the technology advancing to the point that robots and other machines could become more intelligent and physically stronger than people.

Those statements sent ripples across the Internet since they came about a month after Elon Musk, CEO and co-founder of SpaceX and electric car maker Tesla Motors, created headlines when he said artificial intelligence is a threat to humanity.

"I think we should be very careful about artificial intelligence," Musk said at an MIT symposium in October. "With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."

John Bresina, a computer scientist in the Intelligent Systems Division at NASA's Ames Research Center, said he was surprised to hear Musk's and Hawking's statements about AI.

"We're in control of what we program," Bresina said, noting it was his own opinion and not an official NASA statement. "I'm not worried about the danger of AI… I don't think we're that close at all. We can't program something that learns like a child learns even – yet. The advances we have are more engineering things. Engineering tools aren't dangerous. We're solving engineering problems."

While scientists and analysts at the conference said they're not fearful of the intelligent systems being built today, there was discussion about the issue. Ethics in artificial intelligence was among the topics of workshops and sessions held during the six-day conference.

Conference attendees aren't the only ones who have been talking about the ethics and potential perils of creating artificially intelligent systems.

Scientists at Stanford University have begun to explore what intelligent machines will mean for people's everyday lives, as well as for the economy, in another 20, 50 or 100 years.

Sonia Chernova is director of the Robot Autonomy and Interactive Learning lab in the Robotics Engineering Program at Worcester Polytechnic Institute.

Sonia Chernova, an assistant professor of computer science at Worcester Polytechnic Institute, said she doesn't see any foundation for the alarmist statements recently made about AI. However, she said scientists should discuss the effects that future advances in the technology could have on society.

"There are a lot of people thinking about this," Chernova said. "It's not like we're blindly forging ahead. We are taking this seriously but, at the same time, we don't feel there's any kind of imminent concern right now." She added that part of this fear of robotics appears to be a cultural issue.

"If I say something to an American about being a roboticist, they inevitably say, 'Oh, you're going to take over the world!' " said Chernova. "If I'm in Japan, I get a different response. They say, 'That's fantastic. You're helping people. I can't wait to have a robot helping around the house.' In the West, movies and video games -- our culture -- promote the idea that robots are dangerous."

Lynne Parker, a division director in Information and Intelligent Systems with the National Science Foundation, agreed that Americans and others in the West probably have a greater fear of robots because of movies like The Terminator and the TV show Battlestar Galactica.

"I think Hollywood has contributed to this," she said. "The Japanese society has embraced robots. They've really embraced it as a culture."

Parker was quick to point out that we are still far away from having any kind of intelligent machines that we need to fear.

"The robotics people know how far we are from getting anything that works reliably," she said. "It doesn't mean that technology developers don't have a responsibility to try to use these technologies in responsible ways. We need to have conversations about what to do with this. It's our responsibility."

With advances in AI research, a machine today can look at a picture and identify an object, such as a cat or a bottle. However, Parker noted that the machine still doesn't have any understanding of what a cat or a bottle is.

"For robots to become conscious of what they're doing and reason in a way to overcome us, that's really science fiction," she said. "Robotics is very far from having any consciousness and understanding of what it's doing, but we're still responsible to discuss the potential harm and see what we can do to mitigate it."

Etzioni said long-term attention to the future of AI is appropriate, but he worries that headline-grabbing, fearful statements about the technology could slow research or the funding it depends on.

"When you have a one-liner like 'AI is unleashing a demon,' that's more about evoking an emotion than starting a discussion," he added. "It's evoking a primal fear. That goes all the way back to Frankenstein and Mary Shelley. We've always had some fear about the machine and our role in the universe. We can be terrified or we can analyze it."