A detailed record of the bits and pieces of our everyday interactions is a valuable commodity. How often do you stop to think about how much you're giving away?

Even before we're born, government agencies are collecting information which will help them better understand us, and others like us. Whether our parents were on a benefit, for example, our ethnicity, or where we live.

As we go about our daily lives — visiting a doctor, using social services, attending school, paying taxes, travelling — our digital footprint grows.

Advances in technology have enabled better collection and processing of huge amounts of digital information. Increasingly, computer algorithms use this data to inform decisions made about us.

By the time a child is at school, they've shed enough data for Work and Income to identify them as at risk of long-term unemployment and for the Ministry of Education to assess their eligibility for help getting to and from school.

But these are a fraction of what algorithms are capable of. A stocktake published last month found limited use of algorithms and no use of artificial intelligence (according to the report's definitions of those things) by government agencies.

Some within the tech community say the stocktake highlights a risk-averse government. But those with experience on the inside say a lack of education, rather than a lack of expertise or willingness to innovate, is the real issue when it comes to getting projects over the line.

WHY DID WE NEED A STOCKTAKE?

Essentially statistical tools for solving a problem or carrying out a task, algorithms are now fundamental to data analysis. Drawing on historical data, these lines of code can model possible outcomes such as the likelihood of a criminal reoffending, a student dropping out of university, or a child being abused. Or, what shows you want to watch on Netflix.

"They have an essential role in supporting the services government provides, and help deliver new, innovative, and well-targeted policies for New Zealanders," the stocktake said.

While there are many advantages to using algorithms, there are also risks, mainly relating to accuracy and bias. Amazon, for example, recently scrapped an automated recruitment process after it was found to favour men's resumes.

"The Government is acutely aware of the need to ensure transparency and accountability as interest grows regarding the challenges and opportunities associated with emerging technology such as AI," then-Government Digital Services Minister Clare Curran said at the time the audit was announced, in May.

Between June and July this year, 14 agencies were asked to respond to a standard series of questions relating to their use of "operational algorithms", and to provide examples illustrating that use.

The surveyed agencies included the ministries of education, health, justice and social development, as well as the Department of Corrections, Police and Customs.

ALGORITHMS? YES. ARTIFICIAL INTELLIGENCE? NO.

Government Chief Data Steward Liz MacPherson says the stocktake found widespread use of algorithms.

Police use two algorithms that assess the risk of future offending. One calculates the probability that a family violence perpetrator will commit a crime against a family member within the next two years, based on data already held by police such as gender, past incidents of family harm, or criminal history. The other helps to predict whether violence is escalating or likely to occur again at a given scene. The combination of the two models creates an "overall level of concern" for the safety of the people involved.
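To make the idea concrete, here is a purely illustrative sketch of how two model outputs might be combined into a single concern band. None of the names, weights, or thresholds below come from the report; they are invented for illustration and are not the police's actual models.

```python
# Illustrative only: combine two risk probabilities into an overall
# "level of concern" band. All thresholds here are invented examples.

def concern_band(prob_future_offence: float, prob_escalation: float) -> str:
    """Map two model probabilities (each between 0 and 1) to a coarse band."""
    combined = max(prob_future_offence, prob_escalation)  # take the worse signal
    if combined >= 0.7:
        return "high"
    if combined >= 0.4:
        return "medium"
    return "low"

print(concern_band(0.8, 0.2))  # high
print(concern_band(0.3, 0.5))  # medium
print(concern_band(0.1, 0.2))  # low
```

In practice such scores only inform a human judgement; as the stocktake notes, significant decisions remain with people, not the models.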

Work and Income identifies young people at risk of long-term unemployment, so they can be offered support in terms of qualifications and training opportunities. That algorithm is based on data such as demographic information, whether a young person's parents were on a benefit, school history, and notifications to Oranga Tamariki.

And the Ministry of Education uses software to calculate student eligibility for transport assistance and to develop the most efficient routes for school buses.

None of this — or anything else uncovered in the stocktake — counts as AI, according to MacPherson.

But Ali Knott at Otago University's AI and Law in New Zealand Project says some of the tools already being used "can already be thought of as AI systems".

"I'd say anything the report refers to as 'machine learning' algorithms can be considered as 'AI'. A lot of 'operational' algorithms referred to in the report, that make decisions 'based on large / complex data sets', are machine learning algorithms and therefore AI."

When asked if they expected to develop algorithms that rely on AI in the future, eight agencies said yes, five said no (Oranga Tamariki, Ministry of Education, Department of Internal Affairs, Ministry of Justice, and Social Investment Agency), and one was unsure.

Emerging technologies such as machine learning algorithms and artificial intelligence promise to improve public sector service delivery, but they also pose new ethical dilemmas.

WHY SO CAUTIOUS?

Head of machine learning at Jade Software in Christchurch, Dr Syen Nik spent more than four years working on algorithms at the Ministry of Social Development (MSD). He left in March this year. When asked why, he says there were many reasons, but one was that "things could have happened quicker". Almost none of the models he developed ended up being used.

However, he says the stocktake shows progress.

During his time at the agency, his team came up against issues relating to transparency, human rights, and privacy. "We should have dealt with those issues a lot earlier. It would have made the data scientists' jobs much easier."

Now, there are frameworks in place to assess the ethics of any new service or process. Those frameworks are a sound foundation for current and future models, Nik says.

"We learnt that the business has to drive our work. Internal staff who will be using the output want to know what's happening."

The challenge was getting them to see machine learning as helpful, rather than as a competitor.

"Part of the job was educating [frontline staff], showing them these models can help them, and aren't going to replace them.

"Once they understand the models can help them achieve their performance indicators, they start to use them and like them. In turn, that helps the models improve. So it's a feedback loop. We were starting to see that at the time I left."

MSD is widely regarded as leading the public sector in its use of these technologies. "I get the feeling we were leading, and MSD still is," Nik says. "But the gap is closing, quickly, which is a good thing."

At the end of the day, he says, these models are just serving up recommendations. He finds the current hype about them rather bemusing.

"The use of data has always been there. Maybe at a different level, but it's always been there and people have always used it to make decisions."

Quoting AI boffin Andrew Ng, he adds: "Worrying about machines taking over the world is like worrying about overpopulation on Mars."

HELPING HUMANS

Almost all participating agencies use operational algorithms to inform human decision-making, rather than to automate significant decisions, the stocktake found. "Humans, rather than computers, review and decide on almost all significant decisions made by government agencies."

While these tools are helping agencies deliver better and more efficient services, "there's plenty of scope to lift our game", MacPherson says. "New Zealand has robust systems and principles in place around the safe use of data, but as techniques become more sophisticated we must remember to keep the focus on people and make sure the things we are doing are for their benefit."

The report's recommendations include maintaining human oversight, involving those who will be affected, promoting transparency and awareness, regularly reviewing algorithms that inform significant decisions, and monitoring adverse effects.

"Even the best algorithms can perpetuate historic inequality if biases in data are not understood and accounted for," the stocktake said. Yet, it continued: "Few agencies reported any regular review process for existing algorithms to ensure they are achieving their intended aims without unintended or adverse effects."

When asked about this, MacPherson says that while few agencies "explicitly referenced a review process for algorithms", there are "a range of different safeguards and assurance processes that they did specify". Those included getting advice from experts, or employing a dedicated data steward.

The report found agencies could also benefit from a fresh perspective by looking beyond government for privacy, ethics, and data expertise. This could be achieved by bringing together a group of independent experts that agencies could consult for advice and guidance.

WHAT NOW?

No decisions have been made about how the Government will respond to the report's recommendations, MacPherson says.

Evelyn Wareham, Chief Data and Insights Officer at the Ministry of Business, Innovation and Employment, says that, as one of the government's largest and most complex policy and operational agencies, the ministry relies on good-quality evidence to inform decisions.

"The data received by algorithms provides insights on a wide variety of operational and policy decisions," she says. "Activity" is underway to ensure "transparency, accountability and best practice for algorithm use across MBIE", but she offers no specifics.

Government chief digital officer and chief executive of the Department of Internal Affairs, Paul James, says the Government is working with "communities, interest groups, business and other nations to ensure we are developing and making best use of tools to benefit New Zealanders".

"Take-up of technologies is relatively advanced in New Zealand but AI applications are still emerging."