Associate Professor Grant Duncan.
In March, an open letter signed by Elon Musk and Steve Wozniak (among many others) called on all Artificial Intelligence (AI) labs to pause immediately, for at least six months, the training of AI systems more powerful than GPT-4, and for governments to step in and institute a moratorium if the labs were slow to do so voluntarily. AI will undoubtedly produce profound changes in society and government, but the letter asked, rhetorically, 'Should we risk loss of control of our civilization?'
This assumed that 'our civilization' is something that 'we' had been controlling. Well, I certainly wasn't in the control room and I didn't know there was one.
The letter called for comprehensive governmental actions, including safety protocols, certification, independent auditing, dedicated regulatory agencies, publicly-funded research (revealing that there were academics involved) and laws and institutions that would deal with the harmful disruptions to economies and democracy that AI will cause.
These aren't bad suggestions, and such things will emerge, though not as quickly as one might hope, and certainly not immediately.
Critics of the letter fairly pointed out that a six-month moratorium would be hard to police and would impose a mere speed bump on the way to accelerated innovation.
There was no mention of regulating classified military research and development. The United States Congress would probably take much longer than six months to agree on a moratorium, if they could agree at all. Can you imagine libertarian swamp-draining Republicans supporting more governmental controls - at the behest of Silicon Valley? Or were Musk, Wozniak et al. just wanting to buy time to catch up with competitors?
The open letter was unrealistic about the effectiveness of a six-month pause and about how much governments could achieve in that time, given the pace of innovation, but it did stimulate debate.
The Chinese would ignore a self-imposed American pause, but they've gone ahead with drafting their own rules. They don't want AI bots subverting the state or the communist party or spreading ideas that might undermine national unity and harmony. This may put a brake on innovation, but the Chinese know who's in control of their civilization! And the European Union will take a close look at privacy concerns and regulate on those grounds.
Getting back to America, it's remarkable how big-tech entrepreneurs and intellectuals looked to government for a solution. This showed, at least, a sense of responsibility for future consequences of AI.
'Move slow for a while please, and try not to break things.'
There are genuine things to worry about here, though. There's already been a lot written about social media and its algorithms, although mostly by people who wouldn't recognise an algorithm if they tripped over one, and about how these systems pre-select what we view online based on our past choices.
The problem is not so much algorithms, which are just iterative rules or formulae, named after a ninth-century Muslim mathematician, al-Khwārizmī. The main concern is machine learning, by which a system's programming improves without human intervention as it processes new information, detaching the system's development from human awareness, control and accountability.
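To make that distinction concrete, here is a toy sketch (purely illustrative, not any real system): the first function is a classical algorithm, a fixed rule written by a human; the second adjusts its parameter automatically from example data, so its final behaviour depends on what it has processed rather than on what anyone wrote.

```python
# A classical algorithm: a fixed, human-written iterative rule.
# Its behaviour is fully determined by the code a person wrote.
def running_mean(values):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

# A toy "learning" rule: the parameter w is not set by a human but
# nudged automatically from example (x, y) pairs, so its final value
# depends on the data the system has seen, not on the programmer.
def learn_weight(pairs, w=0.0, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, y in pairs:
            w += lr * (y - w * x) * x  # move w toward fitting y ≈ w * x
    return w

print(running_mean([1.0, 2.0, 3.0]))           # fixed rule: always 2.0
print(learn_weight([(1.0, 2.0), (2.0, 4.0)]))  # learned value, shaped by the data
```

The point of the contrast: the first function will behave the same way forever, while the second ends up with a value nobody typed in, which is the seed of the accountability problem the letter worries about.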
Suppose you acquire a bot that converses like a human, remembers things about you that you've forgotten and anticipates your needs and hence advises you about what to do. This is fine if it's managing your diary. But suppose it starts to influence your ethical and political choices by learning more about you than you could ever retain.
For example, would it naively remind you, for conversation's sake, that last month you criticised the politician whom you just praised? Perhaps you'd clicked on the option that asks it to help you be more logical.
If this kind of machine learning controlled whole systems, could its effects somehow change our civilization, including our politics, in ways that feel out of our control?
It took more than three centuries for the idea that Jesus of Nazareth was resurrected to 'go viral' and infect large numbers of Roman citizens and their emperors, to the extent that they would abandon their traditional deities and solemnly celebrate that miraculous rising once a year. They would sometimes even burn people alive for espousing unorthodox ideas about the nature of that Jewish guy who was reportedly crucified by Roman soldiers.
Could we see situations arise where our gullible minds are influenced in absurd, and sometimes dangerous, ways as a result of machine learning by robotic devices? And could this proliferate absurd and dangerous ideas at rates that we've not seen before?
Yes, if we go by historical examples, we are daft enough to let that happen to us. And AI could indeed undermine the norms of political processes such as public debate and free elections, especially if it does things on a large scale that programmers and regulators hadn't anticipated. But we are always being influenced by someone or something anyway. And yet we are also quite smart.
For example, it was always up to you to read this, or not, and you are making up your own mind about it as you go. If you don't believe me, then your disbelief is helping to prove my point!
AI could indeed one day control our politics - if we fail to use our critical faculties.
Associate Professor Grant Duncan teaches political theory and New Zealand politics at Massey University.