“We believe that this work should not be rushed to the detriment of its successful completion,” wrote the six lawmakers, including House Science Chairman Frank Lucas (R-Okla.), Ranking Member Zoe Lofgren (D-Calif.) and key subcommittee leaders.
NIST, a small agency housed within the Commerce Department, has played a central role in President Joe Biden’s AI plans. The White House tasked NIST with creating the AI Safety Institute in its October executive order on AI, and earlier this year the agency released an influential framework to help organizations manage AI risks.
But NIST is also notoriously under-resourced and will almost certainly need help from outside researchers to fulfill its growing AI mandate.
NIST has not publicly disclosed which groups it intends to award research grants through the AI Safety Institute, and the House Science letter does not identify the organizations by name. But one of them is RAND, according to an AI researcher and an AI policy professional at a major tech company, both of whom are familiar with the situation.
A recent RAND report on the biosecurity risks posed by advanced AI models is cited in the House letter’s footnotes as a worrying example of research that has not gone through rigorous academic peer review.
After the article was published Tuesday, RAND spokeswoman Erin Dick said the House committee misinterpreted the think tank’s report on AI and biosecurity. Dick asserted that the report cited in the letter “went through the same rigorous quality assurance process as all RAND reports, including peer review,” and that all research cited in the report was also peer-reviewed.
The RAND spokesperson did not otherwise respond to questions about partnering with NIST on AI safety research.
Lucas spokeswoman Heather Vaughan said NIST staff informed committee staff on Nov. 2 — three days after Biden signed the AI executive order — that the agency intended to award AI safety research grants to two outside groups without any apparent competition, public announcement, or notice of a funding opportunity. She said lawmakers became increasingly concerned when those plans were not mentioned during a NIST public listening session held Nov. 17 to discuss the AI Safety Institute, or during a congressional staff briefing on Dec. 11.
Vaughan neither confirmed nor denied that RAND is one of the organizations referenced by the committee, nor did she identify the other group with which NIST told committee staff it planned to partner on AI safety research. A spokesperson for Lofgren declined to comment.
RAND’s emerging partnership with NIST follows its work on Biden’s executive order on AI, which was drafted with extensive input from senior RAND officials. The venerable think tank is facing increasing scrutiny – including internally – for receiving more than $15 million in AI and biosecurity grants earlier this year from Open Philanthropy, a prolific funder of effective altruist causes bankrolled by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz.
Many AI and biosecurity researchers say effective altruists, including RAND CEO Jason Matheny and senior information scientist Jeff Alstott, unduly emphasize the potential catastrophic risks posed by AI and biotechnology. Those researchers say such risks are largely unsupported by evidence, and warn that the movement’s ties to the biggest AI companies suggest an effort to hamstring competitors or distract regulators from the current harms of AI.
“A lot of people are asking, ‘How is RAND still getting away with taking the Open (Philanthropy) money and now getting money (from the U.S. government) to do it?’” said the AI policy professional, who was granted anonymity due to the sensitivity of the topic.
In the letter, House lawmakers warned NIST that “scientific merit and transparency must remain a primary consideration” and said they expect the agency to “hold recipients of federal research funding on AI safety to the same rigorous standards of scientific and methodological quality that characterize the broader federal research enterprise.”
A NIST spokesperson said the science agency is “exploring options for a competitive process to support cooperative research opportunities” related to the AI Safety Institute, adding that “no decisions have been made.”
The spokesperson did not say whether NIST staff had informed House Science staff during a Nov. 2 briefing that the agency intended to partner with RAND on AI safety research. The spokesperson said NIST “maintains scientific independence in all of its work” and “will carry out its (AI executive order) responsibilities in an open and transparent manner.”
The AI researcher and the AI policy professional said lawmakers and House Science Committee staff are concerned about NIST’s choice to partner with RAND, given the think tank’s ties to Open Philanthropy and its growing focus on the existential risks of AI.
“The House Science Committee is truly dedicated to measurement science,” the AI policy professional said. “And (the existential risk community) doesn’t do measurement science. They don’t use any benchmarks.”
Rumman Chowdhury, an AI researcher and co-founder of the nonprofit technology organization Humane Intelligence, said the committee’s letter suggests that Congress is starting to realize “how important measurement is” when deciding how to regulate AI.
“There is not only hype about AI, there is also hype about AI governance,” Chowdhury wrote in an email. She said the House letter suggests that Capitol Hill is becoming aware of “ideological and political perspectives wrapped in scientific language in an effort to shape how ‘AI governance’ is defined – based on what we decide are the most important things to measure and care about.”