Washington watches Big Tech set its own rules for AI
With Congress unlikely to act quickly, the White House recently called on top industry CEOs, urging them to fill in the gaps on what “responsible AI” looks like.
For Microsoft, the result was a new five-point plan to regulate AI, focused on critical infrastructure cybersecurity and a licensing regime for AI models.
Meanwhile, Google CEO Sundar Pichai and OpenAI CEO Sam Altman have made similar tours overseas, trying to shape the conversation about AI regulation in Europe.
Smith’s speech was attended by more than half a dozen lawmakers from both parties.
Google, for its part, published a blog post on Friday laying out its own policy agenda.
And Smith’s Washington stop comes a week after Altman testified before a Senate committee on AI oversight, where lawmakers expressed broad support for Altman’s ideas and a willingness to work with Congress.
Rep. Derek Kilmer (D-Wash.), a senior member of the House Administration Committee’s subcommittee on modernization, attended Thursday’s Microsoft event and suggested that Congress should listen carefully to companies developing AI.
“Congress isn’t always aware of these important technology issues,” Kilmer said. He suggested that it is not unusual for “people who have the most exposure, access and knowledge of some of these technologies to take an active role and engage policymakers on how to regulate these technologies.”
“Ultimately, however, policymakers will need to exercise independent judgment to do what is right for the American people,” Kilmer said.
Companies reject the idea that they are in control: In a conversation with reporters after the event, Smith dismissed the notion that Microsoft, its business partner OpenAI or other big companies are “in the driver’s seat” when it comes to federal AI rules.
“I’m not even sure we’re in the car,” said Smith, who published a blog post in February with more abstract policy guidance on AI. “But we do offer vantage points and suggested routes for those who actually drive.”
Smith admitted that the tech industry “may have more concrete ideas” about AI regulation than Washington currently does. But he said that should change over the next few months.
“I bet you will see competing legislative proposals. We’ll probably like some more than others, but that’s democracy,” Smith said. “So I don’t mind all the ideas coming from the industry.”
The Microsoft executive isn’t the only tech bigwig hitting the road in an attempt to shape AI regulation. Google CEO Sundar Pichai was in Europe on Wednesday to discuss a voluntary AI pact with the European Commission as the bloc puts the finishing touches on its AI law.
And a week after his own high-profile visit to Washington, Altman made his AI policy tour of Europe. The OpenAI executive told a London audience on Wednesday that there are “technical limits” that could prevent his company from complying with the EU’s AI law, and warned that OpenAI could withdraw from Europe entirely unless significant changes are made to the legislation.
Russell Wald, director of policy at the Stanford Institute for Human-Centered Artificial Intelligence, recently said he worries that some policymakers, especially those in Washington, are paying too much attention to the tech industry’s AI governance proposals.
“It’s a little disappointing that … the realm of industry is the pure focus,” he said on the sidelines of last week’s Senate hearing on the government’s use of AI. Wald suggested that academics, civil society, and government officials should all play a bigger role in shaping federal AI policy than they currently do.
Rep. Ted Lieu (D-Calif.), an emerging leader in AI regulation who also attended Smith’s speech, told POLITICO it was “good to hear from the people who created artificial intelligence.” But sooner or later, he said, a wider range of voices will have to weigh in.
“It’s also very important to hear the huge diversity of perspectives on AI, ranging from researchers to advocacy groups — the Americans who are going to be affected,” Lieu said.
Microsoft’s AI vision
Smith urged Washington to adopt five new AI policy recommendations. Some are relatively straightforward — for example, the company wants the White House to push for broad adoption of the voluntary AI risk management framework released earlier this year by the National Institute of Standards and Technology. That framework has been central to the White House’s messaging on the direction AI companies should take.
“The best way to move fast – and we should move fast – is to build on the good things that already exist,” Smith said Thursday.
The company has asked lawmakers to require “safety brakes” for AI tools that control the operation of critical infrastructure, like power grids and water supply systems, which would ideally ensure that a human always remains in the loop — a point on which Congress can largely agree.
Microsoft also called on policymakers to promote AI transparency and ensure academic and nonprofit researchers have access to advanced computing infrastructure — a stated goal of the National AI Research Resource, which has not yet been authorized or funded by Congress.
Microsoft also wants to work with the government through public-private partnerships. Specifically, the company wants the public sector to use AI as a tool to address “inevitable societal challenges.”
For the D.C. audience, Smith referenced Microsoft’s use of AI to help document war damage in Ukraine and how it can create presentations and other materials for the workplace.
The most important part of Microsoft’s policy proposal calls for a legal and regulatory architecture tailored to the technology itself. Smith wants to “enforce existing laws and regulations” and create a licensing regime for the underlying AI models.
“As Sam Altman said before the Senate Judiciary Subcommittee last week, … we should have licenses in place, so that before such a model is rolled out, the agency is informed of the tests,” the Microsoft president said. This call for a licensing regime for advanced AI models was seen by critics as an effort by Microsoft and OpenAI to prevent smaller competitors from catching up.
Microsoft also wants developers of powerful AI models to “know the cloud” where their models are deployed and accessed, in an effort to manage the cybersecurity risks surrounding their technology.
Smith also wants disclosure rules around AI-generated content to prevent the spread of misinformation — another goal often stated by leading voices in Congress, including Rep. Nancy Mace (R-S.C.).
While Smith said his company operates in “every layer” of the AI ecosystem, he said his new regulatory proposal “isn’t just for big companies like Microsoft.” Startups and small tech companies, Smith suggested, will still play a key role in developing AI-enabled apps.