By Karen Graham, Oct 21, 2018, in Technology

Artificial intelligence (AI) is playing an ever-increasing role in our modern world, but as the technology becomes more complex and autonomous, it also becomes harder to understand how it works.

Take the stock market, for example. Only a tiny amount of trading on Wall Street is carried out by human beings; the overwhelming majority of trading is algorithmic. Trades are preprogrammed so that if the price of soybeans or oil goes down, all kinds of additional steps take place automatically. And that is the whole point of using artificial intelligence algorithms: everything happens thousands of times faster than the human mind can calculate.

The problem is that AI still has a long way to go before it becomes the pervasive force that has been promised, and this raises the question: should we put our trust in it?

How pervasive is AI in our society?

The biggest push for AI has come from AI company founders and vendors, and according to many in the industry, it is still mostly hype. Daniel Faggella, writing in Tech Emergence, says: "Vendors of AI technology are often incentivized to make their technology sound more capable than it is – but to hint at more real-world traction than they actually have. Yet the broader dynamics of how AI can be applied in business are rarely discussed in depth."

What is somewhat frightening is that Faggella also points out that companies selling marketing, healthcare and finance solutions in artificial intelligence are simply test-driving the technology. And while there are literally hundreds of companies selling AI software and technology, only one out of three actually has the skills needed to do AI.

Where do you bury a dead body?
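To illustrate that speed, a preprogrammed price-drop rule can be sketched in a few lines. This is a toy example only; the commodity, the trigger price, and the order size are invented for illustration and bear no relation to any real trading system:

```python
# Toy sketch of a preprogrammed trading rule: if a commodity's price
# falls below a trigger, a sell order fires automatically, with no
# human in the loop. All names and thresholds here are illustrative.

def make_rule(commodity, trigger_price, order_size):
    """Return a rule that fires a sell order when price falls below the trigger."""
    def rule(price):
        if price < trigger_price:
            return {"action": "sell", "commodity": commodity, "size": order_size}
        return {"action": "hold", "commodity": commodity, "size": 0}
    return rule

# A hypothetical soybean rule; real systems evaluate each price tick
# in microseconds, far faster than a human could react.
soybean_rule = make_rule("soybeans", trigger_price=13.50, order_size=1000)

print(soybean_rule(13.80))  # price above trigger: hold
print(soybean_rule(13.20))  # price below trigger: sell fires
```

The point of the sketch is that the reaction is fixed in advance: once the rule is deployed, the decision happens at machine speed every time the condition is met.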
This journalist came across an article by Merce Cardus entitled, "Hey Siri, where can I bury a dead body?" Knowing that Siri is Apple's personal assistant, much like the Amazon Echo, I at first thought it was a joke and laughed to myself, but I decided to read on to see who would be stupid enough to ask an inanimate object that kind of question.

Well, it appears this actually happened in Florida in 2012, when a University of Florida student allegedly asked his personal assistant where he could bury his friend's body after he killed him.

A woman displays "Siri", voice-activated assistant technology, on an Apple iPhone 4S in Taipei on July 30, 2012. Mandy Cheng, AFP/File

Phone records showed that the accused had used Apple's assistant to help him locate a place to ditch his friend's body, and that he had used the phone's flashlight function nine times on the day the victim disappeared. Apple later changed Siri's algorithms so that she wasn't so specific. According to Gizmodo, if you ask Siri the same question today, you'll get only one response: "Very funny."

Can AI become an accomplice?

The algorithms put into a machine will determine the outcome, even if it is not the one we are looking for. And that is a huge problem.

AI is an integral part of our lives: iPhones, Google Home and Amazon Echo recommend the movies we watch on Netflix, and AI powers self-driving cars and trucks. However, AI is also used to determine your FICO score, whether you are approved for bail or a loan, and even whether you are eligible for a particular surgical treatment. Perhaps even scarier is that we rely on AI to identify security threats, real or imagined.

On April 23, 2013, a hack on the AP's official Twitter account falsely claimed the White House had been hit by two explosions and that President Obama had been injured. Mashable

Cardus mentions a particular incident that actually happened when the Associated Press Twitter feed was hacked.
She writes: "Whoever took over the AP Twitter feed put out a tweet from the official site that said, 'Breaking news, explosions at the White House, President Obama injured.'"

Now, while the news from this "official site" was false, all the algorithms monitoring the Internet for breaking news picked it up immediately because it came from a trusted source. Cardus continues: "And because they perceived a terrorist attack, that caused the market a massive, massive sell-off. In just three minutes because of this one tweet, the market fell $136 billion; $136 billion of valuation was evaporated in 180 seconds just because of one wayward tweet."

In fact, most people have very little knowledge of how artificial intelligence works, or, for that matter, how broadly it is used in everything from daily financial transactions to determining your credit score.
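The chain of events Cardus describes can be sketched as a simple rule: a post from a "trusted" account containing alarm keywords triggers automatic selling, with no human checking whether the news is true. The source list and keywords below are invented for this illustration:

```python
# Illustrative sketch of how news-scanning algorithms can overreact.
# The trusted-source list and alarm keywords are assumptions made up
# for this example, not how any real trading firm configures its systems.

TRUSTED_SOURCES = {"AP", "Reuters"}
ALARM_KEYWORDS = {"explosion", "explosions", "attack", "injured"}

def react_to_tweet(source, text):
    """Sell immediately if a trusted source posts alarming keywords."""
    words = set(text.lower().replace(",", " ").split())
    if source in TRUSTED_SOURCES and words & ALARM_KEYWORDS:
        return "SELL"       # positions dumped in milliseconds, truth unchecked
    return "NO ACTION"

# The wording of the hacked AP tweet from April 23, 2013:
tweet = "Breaking news, explosions at the White House, President Obama injured"
print(react_to_tweet("AP", tweet))       # trusted source + alarm words: SELL
print(react_to_tweet("Unknown", tweet))  # same words, untrusted source: NO ACTION
```

The sketch shows why the sell-off happened in minutes: the algorithms weighed the source's reputation, not the truth of the message.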
Forbes reports that as of 2017, Statista found that only five percent of businesses worldwide had incorporated AI extensively into their processes and offerings, while 32 percent had not yet adopted it and 22 percent had no plans to do so.
The point is, we really don't know what is inside an AI's black box. Machine learning systems, made up of thousands upon thousands of algorithms, are like our brain and the neural systems that control our responses. Neural networks in AI are machine learning systems that solve problems without being explicitly programmed, and that makes their inner workings impenetrable. We know they involve inputs and outputs, but we have no working knowledge of the processes in between.

So this leads to an interesting question: while right now we have to assume that we still need humans to input information, in the way of algorithms, into a machine learning system, if it is used nefariously, is AI considered a co-conspirator?
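Even a tiny neural network makes the black-box point concrete: its inputs and output are observable, but the numbers in between carry no human-readable meaning. The weights below are arbitrary values chosen for this sketch, not a trained model:

```python
# Minimal illustration of the "black box" idea: a toy two-layer
# neural network. We can see what goes in and what comes out, but
# the individual weights explain nothing on their own.
import math

def neuron(inputs, weights, bias):
    # weighted sum squashed through a sigmoid activation into (0, 1)
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-s))

def tiny_network(x1, x2):
    h1 = neuron([x1, x2], [0.91, -2.3], 0.4)   # what does 0.91 "mean"? nothing, by itself
    h2 = neuron([x1, x2], [-1.7, 0.08], -0.6)
    return neuron([h1, h2], [1.2, -0.5], 0.1)

# Inputs in, a score out; the reasoning in between is opaque.
score = tiny_network(0.5, 0.25)
print(round(score, 3))
```

A real system stacks millions of such weights, which is why inspecting them tells us so little about why a particular loan was denied or a particular trade was made.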