The UN Security Council for the first time held a session on Tuesday on the threat that artificial intelligence poses to international peace and stability, and Secretary General António Guterres called for a global watchdog to oversee a new technology that has raised at least as many fears as hopes.
Mr. Guterres warned that AI may ease a path for criminals, terrorists and other actors intent on causing “death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale.”
The launch last year of ChatGPT — which can create texts from prompts, mimic voices and generate photos, illustrations and videos — has raised the alarm about disinformation and manipulation.
On Tuesday, diplomats and leading experts in the field of AI laid out for the Security Council the risks and threats — along with the scientific and social benefits — of the emerging technology. Much remains unknown about the technology even as its development speeds ahead, they said.
“It’s as though we are building engines without understanding the science of combustion,” said Jack Clark, co-founder of Anthropic, an AI safety research company. Private companies, he said, should not be the sole creators and regulators of AI.
Mr. Guterres said a UN watchdog should act as a governing body to regulate, monitor and enforce rules on AI in much the same way that other agencies oversee aviation, climate and nuclear energy.
The proposed agency would consist of experts in the field who would share their expertise with governments and administrative agencies that might lack the technical know-how to address the threats of AI.
But the prospect of a legally binding resolution on governing AI remains distant. Most diplomats did, however, endorse the notion of a global governing mechanism and a set of international rules.
“No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors,” said Britain’s foreign secretary, James Cleverly, who presided over the meeting because Britain holds the rotating presidency of the Council this month.
Russia, departing from the majority view of the Council, expressed skepticism that enough was known about the risks of AI to treat it as a threat to global stability. And China’s ambassador to the United Nations, Zhang Jun, pushed back against the creation of a set of global laws, saying that international regulatory bodies must be flexible enough to allow countries to develop their own rules.
The Chinese ambassador did say, however, that his country opposed the use of AI as a “means to create military hegemony or undermine the sovereignty of a country.”
Diplomats also raised the military use of autonomous weapons, whether on the battlefield or for assassinations in another country, such as the satellite-controlled AI robot that Israel dispatched to Iran to kill a top nuclear scientist, Mohsen Fakhrizadeh.
Mr. Guterres said that the United Nations must come up with a legally binding agreement by 2026 banning the use of AI in automated weapons of war.
Prof. Rebecca Willett, director of AI at the Data Science Institute at the University of Chicago, said in an interview that in regulating the technology, it was important not to lose sight of the humans behind it.
The systems are not entirely autonomous, and the people who design them need to be held accountable, she said.
“This is one of the reasons that the UN is looking at this,” Professor Willett said. “There really needs to be international repercussions so that a company based in one country cannot destroy another country without violating international agreements. Real enforceable regulation can make things better and safer.”