
Artificial intelligence, or AI, is being adopted by governments around the world to improve the lives of their citizens. In the United States alone, a new presidential executive order urges state and federal agencies to explore ways to use AI technologies.

In 2015, the United States Department of Homeland Security created an AI system called Emma, a chatbot that can answer numerous questions posed to it in plain English. Users do not need to know what "her" website calls "government speak," the acronyms and terms used in government agency documents.

By 2016, DHS reported that Emma was helping the agency answer almost half a million questions every month, allowing it to handle more inquiries than it had before adopting the AI. It also freed human employees to spend more time helping people with complicated queries that are beyond the AI's abilities. Conversation-automating AI of this kind is now used by government agencies in cities and countries around the globe.

In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security expressing concern about a proposal to use AI to determine whether someone seeking refuge in the US would become a positive, contributing member of society or would be likely to become a threat or a terrorist.

Other government uses of AI are also being questioned, such as attempts at setting bail amounts and criminal sentences, predictive policing, and hiring government workers. All of these attempts have been shown to be prone to technical problems, and limited data can bias their decisions along lines of gender, race, or cultural background.

Other AI technologies, like automated surveillance, facial recognition, and mass data collection, are raising concerns about privacy, security, accuracy, and fairness in a democratic society. As President Trump's executive order demonstrates, there is massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias, and abuse, whether intentional or not, risk working against the principles of international democracies.

As the use of artificial intelligence grows, so does the potential for misuse, bias, and abuse. The best way to prevent this is to educate the public about the appropriate use of AI through conversation among citizens, public administrators, and scientists, to help determine when and where it is appropriate to use these powerful tools, and when it is not.