In US, Regulating AI Is in Its ‘Early Days’


Controlling artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House announcing voluntary AI safety commitments by seven technology companies on Friday.

But a closer look at the activity raises questions about how meaningful the actions are in setting policies around the rapidly evolving technology.

The answer is that it is not very meaningful yet. The United States is only at the beginning of what is likely to be a long and difficult path toward the creation of AI rules, lawmakers and policy experts said. While there have been hearings, meetings with top tech executives at the White House and speeches to introduce AI bills, it is too soon to predict even the roughest sketches of regulations to protect consumers and contain the risks that the technology poses to jobs, the spread of disinformation and security.

“It is still early days, and no one knows what a law would look like yet,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other tech companies.

The United States remains far behind Europe, where lawmakers are preparing to enact an AI law this year that would put new restrictions on what are seen as the technology’s riskiest uses. In contrast, there remains a lot of disagreement in the United States on the best way to handle a technology that many American lawmakers are still trying to understand.

That suits many of the tech companies, policy experts said. While some of the companies have said they welcome rules around AI, they have also argued against tougher regulations akin to those being created in Europe.

Here’s a rundown on the state of AI regulation in the United States.

The Biden administration has been on a fast-track listening tour with AI companies, academics and civil society groups. The effort began in May when Vice President Kamala Harris met at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic and pushed the tech industry to take safety more seriously.

On Friday, representatives of seven tech companies appeared at the White House to announce a set of principles for making their AI technologies safer, including third-party security checks and watermarking of AI-generated content to help stem the spread of misinformation.

Many of the practices that were announced were already in place at OpenAI, Google and Microsoft, or were on track to take effect. They don’t represent new regulations. Promises of self-regulation also fell short of what consumer groups had hoped.

“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of AI is fair, transparent and protects individuals’ privacy and civil rights.”

Last fall, the White House introduced a blueprint for an AI Bill of Rights, a set of guidelines on consumer protections related to the technology. The guidelines aren’t regulations and are not enforceable. This week, White House officials said they were working on an executive order on AI but didn’t reveal details or timing.

The loudest drumbeat on regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee AI, liability for AI technologies that spread disinformation and the requirement of licensing for new AI tools.

Lawmakers have also held hearings about AI, including a hearing in May with Sam Altman, the chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers have tossed around ideas for other regulations during the hearings, including nutritional labels to notify consumers of AI risks.

The bills are in their earliest stages and so far do not have the support needed to advance. Last month, the Senate leader, Chuck Schumer, Democrat of New York, announced a monthslong process for the creation of AI legislation that included educational sessions for members in the fall.

“In many ways we’re starting from scratch, but I believe Congress is up to the challenge,” he said during a speech at the time at the Center for Strategic and International Studies.

Regulatory agencies are beginning to take action by policing some issues emanating from AI.

Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT and asked for information on how the company secures its systems and how the chatbot could potentially harm consumers through the creation of false information. The FTC chair, Lina Khan, has said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by AI companies.

“Waiting for Congress to act is not ideal given the usual timeline of congressional action,” said Andres Sawicki, a professor of law at the University of Miami.
