Meta, Google, and AI firms agree on security measures at Biden meeting

Seven leading AI companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.

The seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – formally made their commitment to new standards of safety, security and trust in a meeting with President Biden at the White House on Friday afternoon.

“We must be clear-eyed and vigilant about the threats emerging technologies can pose — they don’t have to, but they can pose — to our democracy and our values,” Biden said in brief remarks from the Roosevelt Room of the White House.

“This is a serious responsibility; we have to get it right,” he said, joined by company executives. “And there’s also huge, huge upside potential.”

The announcement comes as companies are racing to outdo each other with versions of AI that offer powerful new ways to create text, photos, music and video without human input. But technological advances have prompted fears about the spread of misinformation and dire warnings of an “extinction risk” as artificial intelligence becomes more sophisticated and human-like.

The voluntary safeguards are only an interim first step as Washington and governments around the world seek to put legal and regulatory frameworks in place for the development of AI. The commitments include testing products for security risks and using watermarks so that consumers can identify AI-generated material.
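The announcement does not spell out how such watermarks would work. For text, one published technique (the “green list” scheme of Kirchenbauer et al., 2023) has the generator subtly favor a pseudorandom subset of the vocabulary seeded by each preceding word, so a detector that knows the seeding rule can test whether a passage is suspiciously “green.” Below is a minimal, hypothetical sketch of the detection side; the function and parameter names are illustrative, not any company’s actual scheme.

```python
import hashlib
import random

def green_fraction(tokens, vocab, gamma=0.5):
    """Return the fraction of tokens that land in the 'green list'
    seeded by the preceding token. Watermarked text, generated to
    favor green tokens, scores well above gamma; ordinary text
    scores near gamma (here, about 0.5)."""
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        # Derive a deterministic per-position seed from the previous token.
        seed = int(hashlib.sha256(prev.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        # The green list is a reproducible pseudorandom slice of the vocabulary.
        green = set(rng.sample(sorted(vocab), int(gamma * len(vocab))))
        hits += tok in green
    return hits / max(len(tokens) - 1, 1)
```

A real detector would operate on model tokens rather than words and apply a statistical test to the green fraction, but the core idea, a secret, reproducible bias that survives in the output, is the same.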

But lawmakers have struggled to regulate social media and other technologies in ways that keep pace with their rapid evolution.

The White House did not provide details on an upcoming presidential executive order that aims to address another problem: how to control the ability of China and other competitors to get hold of new AI programs or the components used to develop them.

The order is expected to involve new restrictions on advanced semiconductors and controls on the export of large language models. Those are hard to enforce: much of the software can fit, compressed, on a thumb drive.

An executive order could provoke more industry opposition than Friday’s voluntary commitments, which experts say are already reflected in the practices of the companies involved. The promises will not curb the plans of AI companies or hinder the development of their technologies. And as voluntary commitments, they will not be enforced by government regulators.

“We are pleased to make these voluntary commitments along with others in the industry,” Nick Clegg, president of global affairs at Meta, Facebook’s parent company, said in a statement. “They are an important first step in ensuring responsible AI guardrails are established and create a model for other governments to follow.”

As part of the safeguards, the companies agreed to security testing, in part by independent experts; research on bias and privacy issues; sharing information about risks with governments and other organizations; development of tools to combat societal challenges such as climate change; and transparency measures to identify AI-generated material.

In a statement announcing the commitments, the Biden administration said companies must ensure that “innovation does not come at the expense of the rights and safety of Americans.”

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said.

Brad Smith, president of Microsoft and one of the executives present at the White House meeting, said his company has approved the voluntary safeguards.

“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risks,” Smith said.

Anna Makanju, vice president of global affairs at OpenAI, described the announcement as “part of our continued collaboration with governments, civil society organizations and others around the world to advance AI governance.”

For companies, the standards outlined Friday serve two purposes: an effort to preempt, or shape, legislative and regulatory moves with self-policing, and a signal that they’re approaching new technology thoughtfully and proactively.

But the rules they’ve agreed upon are largely the lowest common denominator and can be interpreted differently by every company. For example, companies have committed to stringent cybersecurity measures on the data used to build the language models upon which generative AI programs are built. But there’s no specificity about what that means, and companies still have an interest in protecting their intellectual property.

And even the most careful companies are vulnerable. Microsoft, one of the companies at Friday’s White House event, scrambled last week to counter a Chinese government-organized hack of the private emails of American officials who deal with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is used to authenticate email, one of the company’s most closely guarded pieces of code.
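The reason a single key is so sensitive: whatever secret signs authentication tokens is implicitly trusted by every service that checks them, so a thief can mint credentials indistinguishable from genuine ones. Here is a minimal, hypothetical sketch; real identity systems, including the one at issue here, sign tokens asymmetrically, and the HMAC construction and names below are simplified stand-ins for illustration.

```python
import base64
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-secret"  # the material whose theft is at issue

def mint_token(claims: dict) -> str:
    # Encode the claims and sign them with the secret key.
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> bool:
    # A relying service accepts any token whose signature checks out.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# Whoever holds the key can mint tokens that verify as genuine:
forged = mint_token({"sub": "official@example.gov", "mailbox_access": True})
assert verify_token(forged)
```

With the key in hand, an attacker needs no password and triggers no failed-login alarms; every forged token simply verifies.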

Given these risks, the deal is unlikely to slow efforts to pass legislation and impose regulation on emerging technology.

Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights, said more needs to be done to protect against the dangers AI poses to society.

“The voluntary commitments announced today are not enforceable, which is why it is imperative that Congress, along with the White House, promptly craft legislation that requires transparency, privacy protections, and intensified research into the wide range of risks posed by generative AI,” Barrett said in a statement.

European regulators are poised to adopt AI laws later this year, which has prompted many companies to push for US regulation. Several lawmakers have introduced bills that would require AI companies to obtain licenses before releasing their technologies, create a federal agency to oversee the industry, and impose data privacy requirements. But members of Congress are far from agreement on the rules.

Lawmakers have been grappling with how to address the rise of AI technology, with some focused on the risks to consumers and others acutely concerned about falling behind rivals, particularly China, in the race for dominance in the sector.

This week, the House Committee on Competition with China sent bipartisan letters to US-based venture capital firms, demanding that they account for investments they had made in Chinese artificial intelligence and semiconductor companies. For months, House and Senate panels have been questioning the AI industry’s most influential entrepreneurs and critics to determine what kinds of legislative guardrails and incentives Congress should explore.

Many of those witnesses, including OpenAI’s Sam Altman, have implored lawmakers to regulate the AI industry, underscoring the potential for the new technology to cause outsized harm. But that regulation has been slow to get underway in Congress, where many lawmakers still struggle to grasp what exactly AI technology is.

In an effort to improve lawmakers’ understanding, Senator Chuck Schumer, Democrat of New York and the majority leader, began a series of sessions this summer to hear from government officials and experts about the merits and dangers of artificial intelligence across a range of fields.

Karoun Demirjian contributed reporting from Washington.