Pause AI? Naïveté or stupidity?
We should pause any development of AI for the next six months. That's the message from the Future of Life Institute, which published an open letter that had been signed by more than 1,700 people by the time I started writing this article.
Headlined “Pause Giant AI Experiments,” the letter goes on and on about the dangers of this new technology, the end of civilization, also something about humanity and blah blah blah ...
I read the open letter and, to spare you the same agony, asked AI to provide a summary. In fact, I asked for a brief summary tailored to three different audiences. You get to pick which one is closer to your heart.
Summary for a CEO:
The article advocates for a pause in the development of AI systems with human-level intelligence and calls for a focus on making current AI systems safer, transparent, and trustworthy. It also emphasizes the need for AI developers to work with policymakers to create robust AI governance systems.
Summary for an elementary school teacher:
This article highlights the risks posed by AI systems with human-competitive intelligence and the need for careful planning and management to ensure positive effects and manageable risks. The authors call for a 6-month pause on the training of AI systems more powerful than GPT-4, during which time shared safety protocols for advanced AI design and development should be developed and implemented. AI research and development should focus on making existing systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal, and policymakers should accelerate the development of AI governance systems to cope with potential disruptions to democracy.
Summary for a conspiracy theorist:
The text argues that AI systems with human-competitive intelligence can pose profound risks to society and humanity, as acknowledged by top AI labs and extensive research. The development of powerful, state-of-the-art AI systems must be paused, and shared safety protocols for advanced AI design and development must be jointly developed and implemented, overseen by independent outside experts. AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems to cope with the dramatic economic and political disruptions that AI will cause. Ultimately, humanity can enjoy a flourishing future with AI if we engineer these systems for the clear benefit of all and give society a chance to adapt, rather than rushing unprepared into a fall.
So far, I am finding ChatGPT useful. If I can get a quick summary for every article I come across and can adjust the comprehension level, that's pretty good. Why would I want to stop that?
The challenge in arguing with the content of this Open Letter is its generic, well-meaning statements. How could I even think of debating questions like 'Should we risk loss of control of our civilization?' Only a monster would do something like this.
However, the naïveté of the letter is demonstrated precisely by these questions:
Should we let machines flood our information channels with propaganda and untruth?
Should we automate away all the jobs, including the fulfilling ones?
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
I would like to remind the signatories of this document that since Gutenberg built the press, machines have been supplying us, using every possible information channel, with propaganda and lies. People have been happy to feed these machines with all kinds of untruths.
Yes, in many monasteries the resentment among monks for losing their jobs is still raw. And let me tell you, these were very fulfilling jobs.
Just as worrying, the writers of the letter say that if industry doesn't get on board with their direction, governments should step in and institute a moratorium of at least six months.
That would put a pause on any further development of more powerful systems, during which we should 'enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt' (what about the poor bastards in Australia, New Zealand and South America!). And speaking of adapting to new technology: humanity is still reeling from the introduction of the 'Start' button in Windows 95, which was used to shut down the computer.
And last but not least, let’s look at the signatories of this Open Letter. Unfortunately, due to high demand, the signing of this document has been suspended. The signatories are either from business or academia. I am sure that anyone from business who signed the document will either immediately stop working on advancing any AI project or, if in a position of authority, will pause any AI development in their respective organization. Meanwhile, all the people from academia will get together and, within the next six months, develop comprehensive guidelines for responsible AI development. Question: what have you been doing for the last 20 years?
As you might expect, nobody (of any importance, or at all) from OpenAI, Google, Microsoft, IBM or the other organizations working hard to learn and build a competitive advantage signed this. Then there are a few names representing companies that are involved in building AI systems. Should we presume that these are the 'good guys', or that they are just desperate and will do anything to slow down the competition?
The special award in the Irony Hall of Fame goes to Mr. Musk. Yes, that Mr. Musk, who provided the initial funding to OpenAI, is building a robot in his Tesla factory to do "anything that humans don’t want to do" and is the founder of Neuralink, which is building the chip-in-the-brain implant with "the eventual goal of human enhancement." (Ask Dr. Octopus how well his safeguards worked out, doing something similar.) Mentioning Twitter, the fountain of truth which trickles out knowledge, is somehow redundant.
Does it mean that everything is well in the world of AI? No. There are lots of things which should be thought through and improved.
Are there long running efforts to understand this technology and make it better? Of course! Efforts like Explainable AI (XAI) have been around for a long time.
Do we have proposed frameworks for designing transparent and safe technology? Absolutely. NIST's AI Risk Management Framework (AI RMF) has been out since January of this year.
The technology called AI is in the spotlight because we have to keep the print presses busy. It is the circus monkey, keeping us entertained. As with any technology introduced in the past, it will be used for good and for bad. As in the past, humanity will adapt and keep going forward. That's the recurrent pattern.