The key message of a new policy report from the University of Manchester is that future artificial intelligence needs to be socially responsible, because the current path of AI development has been shown, through experimentation, to contain bias. Joy Buolamwini, for example, has challenged the inherent racial and gender bias in many facial recognition systems.

This bias becomes a serious issue when such systems are used by businesses and governments, such as when assessing welfare claims; many deployed systems have proved discriminatory.

Bias can also be more subtle. The social technologist Kriti Sharma has raised the issue of the first wave of virtual assistants reinforcing sexist gender roles: the assistants that executed basic tasks (like Apple's Siri and Amazon's Alexa) were given female voices, while, around the same time, the more sophisticated problem-solving systems (such as IBM's Watson and Salesforce's Einstein) were given male-sounding voices.

The University of Manchester report calls on policymakers to do more to ensure that the development of artificial intelligence going forward is both democratic and socially responsible. The report, "On AI and Robotics: Developing policy for the Fourth Industrial Revolution", was led by Dr. Barbara Ribeiro of the Manchester Institute of Innovation Research.

Launching the report, Ribeiro said: "Ensuring social justice in AI development is essential. AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning."

The report is intended to assist employers, regulators and policymakers in understanding the potential effects of artificial intelligence in areas such as industry, healthcare, research and international policy.