A group of AI experts, including Elon Musk, has called for a six-month pause in the development of systems more powerful than OpenAI's newly launched GPT-4 (a large language model), warning in an open letter of the risks such systems could pose to society and humanity.
Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its popular GPT (Generative Pre-trained Transformer) AI program, which has captivated users with its wide range of applications, from engaging in human-like conversation to composing songs and summarizing lengthy documents.
The letter, published by the nonprofit Future of Life Institute, called for a pause in the development of advanced AI until shared safety protocols for such designs had been developed, implemented and audited by independent experts.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.
Economic and political disruption
The letter details the potential risks that AI systems pose to society and civilization in the form of economic and political disruption, and calls on developers to work with policymakers.
Signatories include Emad Mostaque, CEO of Stability AI, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often described as one of the "godfathers of AI," and Stuart Russell, a pioneer of research in the field.
According to the European Union's transparency register, the Future of Life Institute is funded primarily by the Musk Foundation, as well as by the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation.
On Monday, Europol added to the ethical and legal concerns surrounding advanced AI such as ChatGPT, warning that the system could be misused for impersonation, disinformation and cybercrime. Meanwhile, the British government put forward proposals for an "adaptable" regulatory framework for AI.
The government's approach, outlined in a policy paper published on Wednesday, would split responsibility for regulating artificial intelligence (AI) among its existing regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
Transparency
Musk, whose carmaker Tesla uses AI in its Autopilot system, has been vocal about his concerns. Since its launch last year, OpenAI's ChatGPT has prompted rivals to accelerate the development of similar large language models, and companies to integrate generative AI models into their products.
Last week, OpenAI announced that it had partnered with a dozen companies to integrate their services into its chatbot, allowing ChatGPT users to order groceries through Instacart or book flights through Expedia. Sam Altman, CEO of OpenAI, has not signed the letter, a spokesperson for the Future of Life Institute told Reuters.
"The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, a professor at New York University who signed the letter. "The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend itself against whatever harms may materialize."
Critics accused the letter's signatories of promoting "AI hype," arguing that claims about the technology's current capabilities had been greatly exaggerated.
"Such statements are meant to generate hype. They are meant to make people worried," said Johanna Björklund, an AI researcher and associate professor at Umeå University. "I don't think there is any need to pull the handbrake."
Rather than pausing research, she said, AI researchers should be subject to greater transparency requirements. "If you do AI research, you should be very transparent about how you do it."