Australia’s Minister for Superannuation, Financial Services and the Digital Economy Jane Hume has given an assurance that the country’s AI ethics framework will remain voluntary for the foreseeable future.
Speaking at the virtual CEDA AI Innovation in Action event on Tuesday, Hume said there were already sufficient regulatory frameworks in place and that another would be unnecessary.
“We already have a very strong regulatory framework; we already have privacy laws, we already have consumer laws, we already have a data commissioner, we already have a privacy commissioner, we have a misconduct regulator. We have all those guardrails that already sit around the way we run our businesses,” she told ZDNet.
“AI is simply a technology that’s being imposed upon an existing business. It’s important that technology is being used to solve problems. The problems themselves haven’t really changed, so our regulations certainly have to be flexible enough to accommodate technology changes … we want to make sure that there’s nothing in regulations and legislation that prevents the advancement of technology.
“But at the same time, building new regulations for technology, unless we can see a use case for it, is something that we would be reluctant to do, to over legislate and overprescribe.”
The federal government developed the national AI ethics framework in 2019, following the release of a discussion paper by Data61, the digital innovation arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
The discussion paper highlighted the need for AI development in Australia to be supported by a sufficient framework, so that nothing is imposed on citizens without appropriate ethical consideration.
Making up the framework are eight ethical principles: Human, social, and environmental wellbeing; human-centred values with respect to human rights, diversity, and the autonomy of individuals; fairness; privacy protection and security of data; reliability and safety in accordance with the intended purpose of the AI systems; transparency and explainability; contestability; and accountability.
Hume believes the principles have been designed in a way that makes them “kind of universal”, and that industry would therefore be willing to adopt them voluntarily.
“There’s nothing in there that people would feel uncomfortable with, there’s nothing that’s too prescriptive … these are all things that we would expect. I think there’s nothing in there that is particularly onerous,” she said.
While developing these principles is one thing, applying them can be entirely different, Hume admitted.
“You’ve got to have the right governance structures, for instance. But you have to have the right governance structures in your organisation for many things, workplace safety, for instance, is a good example,” she said.
“I think that we would like to see the broader industry, whatever industry that’s adopting AI technologies, to sign up to those frameworks, voluntarily, rather than having something that’s top down and imposed.”
Microsoft, for its part, voluntarily adopted the AI framework by developing an internal governance structure “to enable progress and accountability rules to standardise responsible AI requirements, training and practices to help our employees act on our principles, and to think deeply about the impacts of AI systems”, Microsoft Australia corporate affairs director Belinda Dennett said during the event.
Microsoft was one of the first companies to put its hand up to test-run the AI ethics principles, to ensure they could be translated into real-world scenarios.
The other companies were National Australia Bank, Commonwealth Bank, Telstra, and Flamingo AI.
Earlier this month, CBA revealed that testing the ethics principles during the design and creation of Bill Sense, a feature of its CommBank app, gave it insight into how the bank could apply responsible AI at scale.
“It was great to see that the AI principles sat really neatly with the control and governance frameworks that the bank already had in place. Things like the safe management of data, customer privacy, and transparency have been central to the way we operate since long before the advent of AI,” CBA chief decision scientist Dan Jermyn told ZDNet.
“But the pace, scale, and sophistication of AI solutions mean we need to ensure we are constantly evolving to meet the demands of new technology, which is why collaboration with our partners across government and industry is so important.”
In a bid to ensure AI is applied responsibly, CBA has developed tooling that makes it easier for teams across the bank to deliver AI safely at scale, according to Jermyn.
“For example, we have developed ‘explainable AI’ capability, which makes it simple for any of our business teams to understand and explain the key drivers of even the most complex deep learning models,” he said.
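To illustrate the kind of capability Jermyn describes, the sketch below uses the open-source SHAP library to surface the key drivers behind individual model predictions. It is a minimal, hypothetical example on synthetic data: CBA's internal tooling is not public, and the model, dataset, and library choice here are assumptions, not the bank's actual implementation.

```python
# Illustrative only: feature attribution with SHAP on a synthetic model.
# CBA's "explainable AI" capability is internal; this is a generic stand-in.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical stand-in data: 500 synthetic records with 6 features.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

# SHAP assigns each feature a contribution to each individual prediction,
# surfacing the "key drivers" behind a complex model's output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

for i, row in enumerate(shap_values):
    top = np.argsort(np.abs(row))[::-1][:3]
    print(f"prediction {i}: top drivers -> features {top.tolist()}")
```

In this style of tooling, a business team does not need to understand the model internals; they read the ranked per-prediction attributions to see which inputs drove a given decision.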
He added that responsible application of AI will be necessary as CBA continues to use it as a tool to improve customer experience.
“We see AI as a key enabler for us in providing a great, personalised experience to all of our customers, and so we are committed to ensuring we apply it in a consistently fair and ethical manner,” Jermyn said. “As we continue to grow the ways in which AI helps us to support the financial wellbeing of our customers and communities, it’s essential that we do so in a responsible and sustainable way.”