Like the internet and Facebook before it, AI is the latest technological craze promising to take over the world and revolutionise life on earth, yet two Australian MPs warn its promise comes hand in hand with profound risks.
Describing artificial intelligence (AI) as the “most significant technology development since the creation of the internet itself,” Australia’s shadow minister for communications, David Coleman, and the shadow minister for government services and the digital economy, and for science and the arts, Paul Fletcher, warned that its vast possibilities also expose the nation to profound consequences.
In a press release welcoming the Federal Government’s publication of the Chief Scientist’s Rapid Response Information Report on Generative AI, and the Department of Industry, Science and Resources’ discussion paper on these issues, the pair noted that, in addition to presenting society with risk, AI also offers the potential for “substantial benefits that will flow on from the technology.”
They said government involvement in mitigating the risks posed by AI is “likely,” while stressing that no level of government should over-reach in the sector, so as not to stifle innovation.
“To succeed in the future economy, Australia must succeed in AI,” the shadow ministers said.
“As a nation, we cannot simply be users of AI. We need a flourishing AI sector, in which Australian businesses compete with the best in the world.”
They added that the government “must play its part in helping to make this happen.”
Despite cautioning against heavy-handed government intervention in the AI sector, the shadow ministers did urge the introduction of safeguards to ensure the intellectual property of Australian businesses remains protected as AI’s prominence grows.
This is because, they argued, “generative AI models are likely drawing on the intellectual property of Australian companies today, without paying any compensation to those IP owners.”
“The government needs to move now to put in place intellectual property protections for the Australian media sector, and other affected sectors,” they added.
From excitement to caution, much has been made of the rapid rise of artificial intelligence, especially in the months since the launch of arguably the most prominent AI system, ChatGPT, last November. The technology has crept into all walks of life, from the workplace to the education system, and has spurred numerous responses, including universities incorporating AI-detection tools into their marking systems.
Recently, there has been a surging tide of international commentary calling for AI’s development to be temporarily reined in so the technology can be properly understood, with more than 1,000 technology experts, including Elon Musk, a co-founder of OpenAI, the company behind ChatGPT, and Apple co-founder Steve Wozniak, signing an open letter calling for a six-month pause on the development of the most advanced AI systems.
The letter described AI as presenting “profound risks to society and humanity.” Its signatories argued not that the revolutionary technology, which has the potential to work in unison with humans for a “flourishing future,” should be culled, but rather that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Both the open letter and the shadow ministers’ comments share similar sentiments to those aired by Sam Altman, chief executive officer of OpenAI and one of the masterminds behind ChatGPT, who spoke at a recent hearing of the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
Mr Altman stressed that “regulation of AI is essential,” adding that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
He explained that “OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives, but also, it creates serious risks that we have to work together to manage.”
One of the biggest perceived risks of AI is its impact on the employment market, specifically its potential to automate numerous jobs, but the CEO stressed “there will be far greater jobs on the other side of this,” especially as society comes to realise that ChatGPT and similar systems are “tool[s], not creatures.”