AI-powered government finances: making the most of data and machines – Global Government Forum

Photo by Karolina Grabowska via Pexels

Governments are paying growing attention to the potential of artificial intelligence (the simulation of human intelligence processes by machines) to enhance what they do.

To explore how public authorities are approaching the use of AI for tasks related to public finances, Global Government Fintech (the sister title of Global Government Forum) convened an international panel on 4 October 2022 for a webinar titled "How can AI help public authorities save money and deliver better outcomes?".

The discussion, organised in partnership with SAS and Intel, highlighted how AI is already helping departments to deliver results, but also that AI remains very much an emerging and, to many, rather nebulous field, with many hurdles to clear before widespread use. "Discussions of artificial intelligence often bring up connotations of an Orwellian nature, dystopian futures, Frankenstein," said Peter Kerstens, advisor on technological innovation and cyber security at the European Commission's financial services department. "That is really a challenge for positive adoption and fair use of artificial intelligence, because people are apprehensive about it."

Like most technology-based areas, it is a field that is also moving very quickly. "If the last class you took in data science was three years ago, it's already dated," cautioned Steve Keller, acting director of data strategy at the US Treasury's Bureau of the Fiscal Service, in his own opening remarks.

Kerstens began by describing the very name "artificial intelligence" as a big problem, asserting that AI is neither artificial nor particularly intelligent, at least not in the way that humans are intelligent.

"A better way to think about artificial intelligence and machine learning is self-learning high-capacity data processing and data analytics, and the application of mathematical and statistical methodologies to data," he explained. "That is, of course, not a very appealing name, but that is what it is. But the self-learning or self-empowering element is very important in AI, because you have to look at it in comparison to traditional data processing."

Continuing this theme of caution, he explained: "Like all technology, AI enhances human and organisational capability for the better, but potentially also for the worse. So, it really depends on what use you make of that tool. You can make very positive use of it. But you can also make very negative uses of it. And that's why governance of your artificial intelligence and machine learning, and potentially rules and ethics, are important."

For financial regulators, AI is proving useful in processing the vast amounts of data and reports that companies must submit. "It goes beyond human capability, or you have to put lots and lots of people onto it, to process just the incoming information," he said.

Read more: Biden sets out AI Bill of Rights to protect citizens from threats from automated systems

Kerstens then mentioned AI's potential for law enforcement. Monitoring the vast volumes of money moving through the financial system for fraud, sanctions breaches and money laundering requires very powerful systems. "But this is also risky because it comes very close to mass surveillance," he said. "So, if you apply artificial intelligence or machine learning engines onto all of these flows, you really get into this dystopian future of Big Brother."

Kerstens also touched on AI's use in understanding macroeconomic developments. "Typically, macroeconomic policy assessment is very politically driven, and this blurs the objectivity of the assessment. AI assessment is much more independent, because it just looks at the data without any preconceived notions and draws conclusions, including conclusions that may not necessarily be very desirable," he said.

The US Treasury's Keller described the ultimate aim of AI as being to improve decision accuracy, forecasting and speed: "trying to use data to make scientific decisions". This includes, he continued, "testing and verifying our assumptions with data to help make sure that we don't break things, but also help us ask important questions".

He outlined four AI use areas for the Bureau of the Fiscal Service: Treasury warrants (authorisations that a payment be made); fraud detection; monitoring; and entity resolution.

In the first area, he said the focus was turning bills into "literally a dataset": the bureau has experimented with using natural language processing to turn written legislation into "coherent, machine-readable data that has account codes and budgeted dollars for those account codes". In the second area, he said the focus was checking people are who they say they are ("and how we detect that at scale"); in the third, uses include monitoring whether people are using services correctly.
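The bureau's actual pipeline is not public, so purely as an illustration of the idea, a minimal sketch might pair account codes with appropriated dollar amounts using pattern matching over an invented excerpt of legislative text (the account-code format and sample wording below are hypothetical):

```python
import re

# Hypothetical excerpt of appropriations text; the real formats used
# by the Bureau of the Fiscal Service are not public.
bill_text = (
    "For necessary expenses of the Widget Administration, account 020-0550, "
    "$1,250,000 is appropriated; for account 020-0560, $300,000."
)

def extract_line_items(text):
    """Pair each account code with the dollar amount that follows it."""
    pattern = r"account (\d{3}-\d{4}),\s*\$([\d,]+)"
    return {code: int(amount.replace(",", ""))
            for code, amount in re.findall(pattern, text)}

items = extract_line_items(bill_text)
print(items)  # {'020-0550': 1250000, '020-0560': 300000}
```

Real legislation is far less regular than this, which is why the bureau's work relies on natural language processing rather than fixed patterns; the sketch only shows the target output shape: account codes mapped to budgeted dollars.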

"We're collecting data from so many elements, and often in large public-sector areas, the left hand doesn't talk to the right hand," he said of entity resolution. "We often need to find a way to connect these two up in such a way that we are looking at the same entity, so that we can share data in the long run. So, data can be brought together and utilised by data scientists, or eventually to create AI that would help these other three things to happen."
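Entity resolution itself is a well-studied problem. As a rough sketch of the idea, using invented records and only Python's standard library, two systems' records for the same organisation can be linked by normalising names and fuzzy-matching the result (the records, suffix list and threshold here are all illustrative assumptions, not the bureau's method):

```python
from difflib import SequenceMatcher

# Invented records held by two different systems that should resolve
# to the same real-world entity.
payments_record = {"name": "ACME Widgets, Inc.", "city": "Springfield"}
tax_record = {"name": "Acme Widgets Incorporated", "city": "Springfield"}

def normalise(name):
    """Lower-case and strip punctuation and common legal suffixes."""
    name = name.lower().replace(",", "").replace(".", "")
    for suffix in ("incorporated", "inc", "ltd", "llc"):
        name = name.removesuffix(" " + suffix)
    return name.strip()

def same_entity(a, b, threshold=0.85):
    """Declare a match when cities agree and normalised names are similar."""
    if a["city"] != b["city"]:
        return False
    score = SequenceMatcher(None, normalise(a["name"]), normalise(b["name"])).ratio()
    return score >= threshold

print(same_entity(payments_record, tax_record))  # True
```

Production systems add blocking (to avoid comparing every pair), richer features than name and city, and human review of borderline scores, but the core step of deciding "are these the same entity?" is the same.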

Read more: Artificial intelligence in the public sector: an engine for innovation in government if we get it right

Keller also raised ethical, upskilling and cultural considerations. "If people start buying IT products that are going to have AI organically within them, or they're building them, [questions should arise such as]: are we doing it ethically? Do we have analytics standards? How are we testing? Are we actually getting value from the product? Or is it a total risk?"

He concluded his opening remarks by outlining how the bureau was building an internal data ecosystem, including a data governance council, a data analytics lab, a high-value use case compendium and a "data university".

The Centre for Data Ethics & Innovation (CDEI), which is part of the UK Department for Digital, Culture, Media and Sport, was established three years ago to drive responsible innovation across the public sector.

"A huge focus is around supporting teams to think about governance approaches," the centre's deputy director, Sam Cannicott, explained. "How do they develop and deploy technology in a responsible way? How do they have mechanisms for identifying and then addressing some of the ethical questions that these technologies raise?"

The CDEI has worked with a varied cross-section of the public sector, including the Ministry of Defence (to explore responsible AI use in defence); police forces; and the Department for Education and local authorities (to explore the use of data analytics in children's social care). "These are all really sensitive, often controversial areas, but also where data can help inform decision-making," he said.

Read more: Canada to create top official to police artificial intelligence under new data law

The CDEI does not prescribe what should be done. Instead it helps different teams to think through these questions themselves.

Ultimately, the questions are complex, Cannicott said. "While lots of teams might seek an easy answer, [to] be told 'what you're doing is fine', it's often more complicated, particularly when we look at how you develop a system, then deploy it, and continue to monitor and evaluate. So, we support teams to think about the whole lifecycle process."

The CDEI's current work programme is focused on three areas: building an effective AI assurance ecosystem (including exploring standards and impact assessments, as well as risk assessments that might be undertaken before a technology is deployed); responsible data access, including a focus on privacy-enhancing technologies; and transparency (the CDEI has been working with the Central Digital and Data Office to develop the UK's first public sector algorithmic transparency standard).

This is underpinned by a public attitudes function to ensure citizens' views inform the CDEI's work, important when it comes to the critical challenge of trust.

Dr Joseph Castle, adviser on strategic relationships and open source technologies at SAS, described how public authorities around the globe are using AI across a diverse set of fields, ranging from infrastructure and transport through to healthcare.

In government finance, he said, authorities are using analytics and AI to assess policy, risk, fraud and improper payments.

Castle, who previously worked for more than 20 years in various US federal government roles, provided two examples of SAS's work in the public sector: with Italy's Ministry of Economy and Finance (MEF), and with Belgium's Federal Public Service Finance.

In the Italian example, he said, MEF used analytics to calculate risk on financial guarantees, providing up-to-date reporting for improved systematic liquidity and risk management during COVID-19. Work with the Belgian ministry, meanwhile, has focused on using analytics and AI to predict the impact of new tax rules.

"The most recent focus for public entities has been on AI research and governance, leading to a better understanding of AI technology itself and responsible innovation," he said. "Public sector AI maturation allows for improved service, reduced costs and trusted outcomes."

Australia's National Artificial Intelligence Centre launched in December 2021. It aims to accelerate positive AI adoption and innovation to benefit businesses and communities.

Stela Solar, the centre's director, described AI's ability to scale as "incredibly powerful". But, she said, it is "incredibly important" that organisations exploring and using AI tools do so responsibly and inclusively.

In opening remarks reflecting the centre's focus, she proposed three factors that will be important in maximising AI's impact beyond government.

The first, she said, is that more should be done to connect businesses with research- and innovation-based organisations. A national listening tour organised by the centre had found, she said, low awareness of AI's capabilities. "Unless we empower every business to be connected to those opportunities, we won't really succeed," she warned.

Her second point focused on small and medium-sized businesses. "Much of the guidance that exists is really targeted at large enterprises to experience, create and adopt AI," she said. "But small and medium business is really struggling in this area, which is ironic, as AI really presents a great equaliser opportunity because it can deal with scale and take action at scale. It can really uplift the impact that small and medium businesses can have."

Her third point focused on community understanding, which she described as a critical factor in accelerating the uptake of AI technologies. This includes achieving engagement from "diverse perspectives in how AI is shaped, created [and] implemented".

Topics including trust in AI systems, the risk of bias and overcoming scepticism were addressed further during the webinar's Q&A.

In terms of trust, what goes into any AI tool affects what comes out. "How reliable [AI systems] are depends on how good and how unbiased the dataset was," Kerstens said. "Does it have known biases, or something that is a proxy for biases? For example, sometimes people use addresses. People's addresses, especially in countries where you have very diverse populations, and where different population groups and different racial or religious groups live in particular areas, can be a proxy for religious affiliation, or for race. If you're not careful, your artificial intelligence engine is going to build in these biases, and therefore it's going to be biased."
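The proxy problem Kerstens describes can be checked for directly. As a toy sketch (invented postcodes and groups, and a deliberately crude measure), one test is how often a record's group matches the majority group of its postcode; the closer that share is to 1.0, the more the postcode leaks the protected attribute to any model trained on it:

```python
from collections import Counter

# Invented toy records: if postcode almost determines group membership,
# a model given the postcode is implicitly given the protected attribute.
records = [
    ("NW1", "group_a"), ("NW1", "group_a"), ("NW1", "group_a"), ("NW1", "group_b"),
    ("SE5", "group_b"), ("SE5", "group_b"), ("SE5", "group_b"), ("SE5", "group_a"),
]

def proxy_strength(rows):
    """Share of records whose group matches the majority group of their
    postcode. 1.0 means a perfect proxy; near 1/n_groups means no signal."""
    majority = {}
    for postcode in {p for p, _ in rows}:
        counts = Counter(g for p, g in rows if p == postcode)
        majority[postcode] = counts.most_common(1)[0][0]
    hits = sum(1 for p, g in rows if majority[p] == g)
    return hits / len(rows)

print(proxy_strength(records))  # 0.75
```

Practitioners use more principled measures (mutual information, or training a classifier to predict the protected attribute from the candidate feature), but the underlying question is the same one Kerstens raises: how much of the sensitive attribute can be recovered from the "neutral" field?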

"It's not just about bias within AI, it's bias in the data," said Castle, emphasising the importance of responsible innovation across the analytics lifecycle.

Read more: Brazil's national AI strategy is unachievable, government study finds

Solar provided a further dimension, adding that organisations can often find themselves working with substantial gaps in data (which she referred to as "data deserts"). "It's actually been impressive to see some of the grassroots efforts across communities to gather datasets to increase representation and diversity in data," she said, giving examples from Queensland and New South Wales where, respectively, communities had provided data to help shape and steer investments and to fill gaps in elderly health data.

On this theme she said that co-design of AI systems with the communities the technology serves or affects "will go a long way to address some of the biases, and also will go a long way into the question of what should be done and what shouldn't be done".

Scepticism about the use of AI from policymakers, particularly those who are not technologists, was discussed as a common challenge.

"Sometimes there's a push to use these technologies because they can be seen as a way to save money," observed Cannicott. "There is also nervousness, because some have seen where things have gone wrong, and they don't want to be to blame."

He emphasised the importance of experimentation, governance ("having really clear accountability and decision-making frameworks to walk through the ethical challenges that might come up and how you might address them") and public engagement.

"Some polling we did fairly recently suggested that around half of people don't think the data that government collects from them is used for their benefit," he said. "There's quite a bit of a trust gap there, [so] decision-makers [have] to start demonstrating that they are able to use data in a way that benefits people's lives."

Keller emphasised the importance of incorporating recourse into AI systems. "If I build a system that detects fraud, and flag somebody as a villain and they're not, we need to give them an easy route to appeal that process," he said.
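What building recourse into a system means in practice can be sketched in a few lines. The class below is hypothetical (Keller described the principle, not an implementation): every automated flag carries an appeal route, and an upheld appeal overrides the model's score.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: every automated fraud flag can be appealed,
# and a successful appeal always overrides the model's decision.
@dataclass
class FraudFlag:
    case_id: str
    model_score: float
    appealed: bool = False
    human_decision: Optional[str] = None  # "upheld" or "overturned"

    def flagged(self) -> bool:
        # Human review takes precedence over the model score.
        if self.human_decision == "overturned":
            return False
        return self.model_score >= 0.9  # illustrative threshold

    def appeal(self, reviewer_decision: str) -> None:
        self.appealed = True
        self.human_decision = reviewer_decision

flag = FraudFlag(case_id="C-1042", model_score=0.95)
print(flag.flagged())      # True: the model flagged this case
flag.appeal("overturned")  # human review finds a false positive
print(flag.flagged())      # False: recourse overrides the model
```

The design point is that the override lives in the decision logic itself, not in a side process: once a reviewer overturns a flag, no downstream consumer of `flagged()` can see the stale model verdict.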

AI is often a purely technical conversation. But when it comes to government use of AI, policy and politics inevitably get entwined.

"To develop artificial intelligence, you need vast amounts of data. Europeans tend to look at personal data protection in a different way than people in the US do," pointed out Kerstens.

Organisational leaders driven by doctrine could struggle to accept a role for AI. "If you run an organisation or a governmental entity based on politics, artificial intelligence isn't something you're going to like very much, because it is the data speaking to you," he continued. "They do like artificial intelligence and data when the data confirms a doctrinal or political view. But if the data does not support [their] view, they'll dismiss it."

Public sector agencies also need to be savvy about the AI solutions they are buying. "Increasingly, public-sector organisations are being sold off-the-shelf tools. And actually, that's quite a dangerous space to be in," said Cannicott. "Because, for example, if you [look at] children's social care (different geographies, different populations), there's all sorts of different factors in that data. If you're not clear on where the data is coming from to build those tools initially, then you probably shouldn't be using that technology. That's also where testing and experimentation is very important."

There is clearly momentum building behind AI. But an overriding theme of the webinar was the extent to which many remain in the dark, or deeply sceptical.

"Often I've seen AI be implemented by someone who's very passionate, and it stays as this hobby experiment and project," said Solar, emphasising the importance of developing a base-level understanding of AI across all levels of an organisation. "For it really to get the momentum across the organisation and to be rolled out into full production, with all the benefits that it can bring, you really need to bring along the policy decision-makers, the leaders, the entire organisational chain," she said.

Kerstens concluded by emphasising that the story of AI's growing deployment across the public sector (and beyond) remains in its early chapters. "AI is very powerful. It's just very early days," he said. "But what people are most afraid of is that they don't understand how the artificial intelligence engine thinks. We should focus on productive, useful applications, and not the nefarious ones."

AI's advocates will be hoping that, over time, fewer people come to compare it to the tale of Frankenstein.

The Global Government Fintech webinar "How can AI help public authorities save money and deliver better outcomes?" was held on 4 October 2022, with the support of knowledge partners SAS and Intel. You can watch the 75-minute webinar via our dedicated event page.

Read more: AI intelligence: equipping public and civil service leaders with the skills to embrace emerging technologies

